Annual Report: Foundations of Hybrid and Embedded Systems and Software. NSF/ITR Project – Award Number: CCR-00225610. University of California at Berkeley.


Abstracts for key publications representing project findings during this reporting period are provided here, listed alphabetically by first author. A complete list of publications that appeared in print during this reporting period is given in section 3 below, including publications representing findings that were reported in the previous annual report.

[1] A Stochastic Approximation for Hybrid Systems. A. Abate, A. Ames, S. Sastry, In Proc. American Control Conference, (in publication), Portland, June 2005.
Abstract: This paper introduces a method for approximating the dynamics of deterministic hybrid systems. Within this setting, we shall consider jump conditions that are characterized by spatial guards. After defining proper penalty functions along these deterministic guards, corresponding probabilistic intensities are introduced and the deterministic dynamics are approximated by the stochastic evolution of a continuous-time Markov process. We will illustrate how the definition of the stochastic barriers can avoid ill-posed events such as “grazing,” and show how the probabilistic guards can be helpful in addressing the problem of event detection. Furthermore, this method represents a very general technique for handling Zeno phenomena; it provides a universal way to regularize a hybrid system. Simulations will show that the stochastic approximation of a hybrid system is accurate, while being able to handle “pathological cases.” Finally, further generalizations of this approach are motivated and discussed.

[2] New Congestion Control Schemes over Wireless Networks: Stability Analysis. Alessandro Abate, Minghua Chen, Avideh Zakhor, Shankar Sastry, submitted to IFAC 05.
Abstract: The objective of this work is to introduce two original flow control schemes for wireless networks. The mathematical underpinnings lie in the recently developed congestion control models for Transmission Control Protocol (TCP)-like schemes; more precisely, the model proposed by Kelly for the wired case is taken as a template and properly extended to the more involved wireless setting. We introduce two ways to modify a part of the model; the first is through a static law, and the second via a dynamic one. In both cases, we prove the global stability of the schemes and present a convergence rate study and a stochastic analysis.
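A toy illustration of the guard-relaxation idea in [1] (the penalty function and intensity below are illustrative choices, not the paper's construction): a hard guard at x = 1 is replaced by a jump intensity that grows with the penetration past the guard, so the jump time of the resulting continuous-time Markov process concentrates near the deterministic crossing time.

```python
import random

def simulate(lam_scale=200.0, dt=1e-4, seed=0):
    """Flow x' = 1 toward a guard at x = 1. Instead of jumping exactly
    at the guard, jump with probability lam(x)*dt, where the intensity
    lam grows with the penalty max(0, x - 1)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while True:
        x += 1.0 * dt                        # continuous dynamics x' = 1
        t += dt
        lam = lam_scale * max(0.0, x - 1.0)  # intensity from the penalty
        if rng.random() < lam * dt:          # the Markov jump fires
            return t, x

jump_time, jump_state = simulate()  # lands shortly after the deterministic crossing at t = 1
```

Because the intensity vanishes before the guard and grows past it, a grazing trajectory acquires a small but well-defined jump probability instead of triggering an ill-posed event.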
[3] New Congestion Control Schemes over Wireless Networks: Delay Sensitivity Analysis and Simulations. A. Abate, M. Chen, S. Sastry, In Proc. International Federation of Automatic Control World Congress, (in publication), Prague, July 2005.
Abstract: This paper proposes two new congestion control schemes for packet-switched wireless networks. Starting from the seminal work of Kelly (Kelly et al., Dec 1999), we consider the decentralized flow control model for a TCP-like scheme and extend it to the wireless scenario. Motivated by the presence of channel errors, we introduce updates in the part of the model representing the number of connections the user establishes with the network; this assumption has an important physical interpretation. Specifically, we propose two updates: the first is static, while the second evolves dynamically. The global stability of both schemes has been proved; also, a stochastic stability study and the rate of convergence of the two algorithms have been investigated. This paper focuses on the delay sensitivity of both schemes. A stability condition on the parameters of the system is introduced and proved. Moreover, some deeper insight into the structure of the oscillations of the system is attained. To support the theoretical results, simulations are provided.

[4] Robust Model Predictive Control through Adjustable Variables: An Application to Path Planning. A. Abate, L. El Ghaoui, In Proc. International Conference on Decision and Control, Atlantis, Bahamas, December 2004.
Abstract: Robustness in Model Predictive Control (MPC) is the main focus of this work. After a definition of the conceptual framework and of the problem’s setting, we will analyze how a technique developed for studying robustness in Convex Optimization can be applied to address the problem of robustness in the MPC case. Exploiting this relationship between Control and Optimization, we will tackle robustness issues for the first setting through methods developed in the second framework. Proofs for our results are included. As an application of this Robust MPC result, we shall consider a Path Planning problem and discuss some simulations thereabout.
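The wired-case template that [2] and [3] extend is Kelly's rate-control law, in which a user's sending rate x evolves as dx/dt = kappa * (w - x * p(x)) for a willingness-to-pay w and a congestion price p. A minimal Euler discretization, as a sketch only: the price function p(x) = x and the gains below are stand-ins, not the schemes of these papers.

```python
def kelly_primal(w=1.0, kappa=0.1, dt=0.01, steps=5000):
    """Euler simulation of dx/dt = kappa * (w - x * p(x)) with the
    stand-in congestion price p(x) = x. The equilibrium solves
    w = x * p(x), i.e. x* = 1 for w = 1."""
    x = 0.1
    for _ in range(steps):
        price = x                        # congestion signal fed back to the user
        x += dt * kappa * (w - x * price)
    return x

rate = kelly_primal()  # converges toward the equilibrium rate x* = 1
```

The wireless extensions in [2] and [3] modify the connection-count part of this template, statically in one scheme and dynamically in the other.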
[5] A Stability Criterion for Stochastic Hybrid Systems. A. Abate, L. Shi, S. Simic, S. Sastry, In Proc. International Symposium on Mathematical Theory of Networks and Systems, Leuven, July 2004.
Abstract: This paper investigates the notion of stability for Stochastic Hybrid Systems. The uncertainty is introduced in the discrete jumps between the domains, as if we had an underlying Markov Chain. The jumps happen every fixed time T; moreover, a result is given for the case of probabilistic dwelling times inside each domain. Unlike the more classical Hybrid Systems setting, the guards here are time-related, rather than space-related. We shall focus on vector fields describing input-less dynamical systems. Clearly, the uncertainty intrinsic to the model forces us to introduce a fairly new definition of stability, which can nonetheless be related to the classical Lyapunov one. Proofs and simulations for our results are provided, as well as a motivational example from finance.
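The setting of [5], with jumps driven by an underlying Markov chain at fixed dwell times T, can be imitated for scalar linear modes; the chain, decay rates, and dwell time here are invented for illustration, and both modes are chosen stable so the state decays along every switching path.

```python
import math
import random

def switched_decay(T=1.0, steps=50, seed=1):
    """Two stable scalar modes x' = -a_i * x; every T time units a
    two-state Markov chain selects the next mode. Since both modes are
    stable, |x| decays regardless of the switching sequence."""
    rates = {0: 0.5, 1: 2.0}            # decay rate a_i of each mode
    P = {0: [0.3, 0.7], 1: [0.6, 0.4]}  # Markov transition probabilities
    rng = random.Random(seed)
    mode, x = 0, 1.0
    for _ in range(steps):
        x *= math.exp(-rates[mode] * T)  # exact flow over one dwell period
        mode = 0 if rng.random() < P[mode][0] else 1
    return x

final_state = switched_decay()  # decays at least as fast as exp(-0.5 * T * steps)
```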
[6] Hierarchical Online Control Design for Autonomous Resource Management in Advanced Life Support Systems. S. Abdelwahed, J. Wu, G. Biswas, E. J. Manders, paper no. 2005-01-2965, International Conference on Environmental Systems and European Symposium on Space Environmental Control Systems, (to appear), Rome, Italy, July 11-14, 2005.
Abstract: This paper presents a distributed, hierarchical control scheme for autonomous resource management in complex embedded systems that can handle dynamic changes in resource constraints and operational requirements. The developed hierarchical control structure handles the interactions between subsystem and system-level controllers. A global coordinator at the root of the hierarchy ensures that resource requirements for the duration of the mission are not violated. We have applied this approach to design a three-tier hierarchical controller for the operation of a lunar habitat that includes a number of interacting life support components.

[7] Online Fault-Adaptive Control for Efficient Resource Management in Advanced Life Support Systems. S. Abdelwahed, J. Wu, G. Biswas, J. Ramirez, E. J. Manders, Habitation: International Journal of Human Support Research, vol. 10, no. 2, pp. 105-115, 2005.
Abstract: This paper presents the design and implementation of a controller scheme for efficient resource management in Advanced Life Support Systems. In the proposed approach, a switching hybrid system model is used to represent the dynamics of the system components and their interactions. The operational specifications for the controller are represented as a utility function, and the corresponding resource management problem is formulated as a safety control problem. A limited-horizon online supervisory controller is used for this purpose. The online controller explores a limited region of the state space of the system at each time step and uses the utility function to decide on the best action. The feasibility and accuracy of the online algorithm can be assessed at design time. We demonstrate the effectiveness of the scheme by running a set of experiments on the Reverse Osmosis (RO) subsystem of the Water Recovery System (WRS).

[8] Semantic Translation of Simulink/Stateflow Models to Hybrid Automata using Graph Transformations. A. Agrawal, Gy. Simon, G. Karsai, Electronic Notes in Theoretical Computer Science, In Proc. Workshop on Graph Transformation and Visual Modeling Techniques (GT-VMT 2004), Volume 109, pp. 43-56.
Abstract: Embedded systems are often modeled using Matlab’s Simulink and Stateflow (MSS) to simulate plant and controller behavior, but these models lack support for formal verification. On the other hand, verification techniques and tools do exist for models based on the notion of Hybrid Automata (HA), but there are no tools that can convert Simulink/Stateflow models into their semantically equivalent Hybrid Automata models. This paper describes a translation algorithm that converts a well-defined subset of the MSS modeling language into an equivalent hybrid automaton. The translation has been specified and implemented using a metamodel-based graph transformation tool. The translation process allows semantic interoperability between the industry-standard MSS tools and the new verification tools developed in the research community.

[9] Reusable Idioms and Patterns in Graph Transformation Languages. A. Agrawal, A. Vizhanyo, Z. Kalmar, F. Shi, A. Narayanan, G. Karsai, International Workshop on Graph-Based Tools, In Proc. 2004 International Conference on Graph Transformations, Rome, Italy, October 2004.
Abstract: Software engineering tools based on Graph Transformation techniques are becoming available, but their practical applicability is somewhat reduced by the lack of idioms and design patterns. Idioms and design patterns provide prototypical solutions for recurring design problems in software engineering, but their use can be easily extended into software development using graph transformation systems. In this paper we briefly present a simple graph transformation language, GReAT, and show how typical design problems that arise in the context of model transformations can be solved using its constructs. These solutions are similar to software design patterns, and are intended to serve as the starting point for a more complete collection.

[10] Blowing Up Affine Hybrid Systems. A. D. Ames, S. Sastry, 43rd IEEE Conference on Decision and Control 2004 (CDC'04), Atlantis, Paradise Island, Bahamas, Dec. 2004, pp. 473-478.
Abstract: In this paper we construct the "blow up" of an affine hybrid system H, i.e., a new affine hybrid system Bl(H) in which H is embedded, that does not exhibit Zeno behavior. We show the existence of a bijection U between periodic orbits and equilibrium points of H and Bl(H) that preserves stability; we refer to this property as P-stability equivalence.

[11] Characterization of Zeno Behavior in Hybrid Systems using Homological Methods. A. D. Ames, S. Sastry, 24th American Control Conference 2005 (ACC’05), (in publication), Portland, OR, 2005.
Abstract: It is possible to associate to a hybrid system a single topological space: its underlying topological space. Simultaneously, every hybrid system has a graph as its indexing object: its underlying graph. Here we discuss the relationship between the underlying topological space of a hybrid system, its underlying graph, and Zeno behavior. When each domain is contractible and the reset maps are homotopic to the identity map, the homology of the underlying topological space is isomorphic to the homology of the underlying graph; the nonexistence of Zeno is implied when the first homology is trivial. Moreover, the first homology is trivial when the null space of the incidence matrix is trivial. The result is an easy way to verify the nonexistence of Zeno behavior.
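The final reduction in [11] is directly computable: the first homology is trivial exactly when the incidence matrix of the underlying graph has a trivial null space, i.e. full column rank. A sketch of that check (the graph encoding below is my own, not the paper's):

```python
import numpy as np

def first_homology_trivial(num_nodes, edges):
    """Build the node-edge incidence matrix of a directed graph and
    report whether its null space is trivial, i.e. whether its rank
    equals the number of edges. For a connected graph this fails
    exactly when the graph contains a cycle."""
    D = np.zeros((num_nodes, len(edges)))
    for j, (src, dst) in enumerate(edges):
        D[src, j] -= 1.0
        D[dst, j] += 1.0
    return bool(np.linalg.matrix_rank(D) == len(edges))

# A path 0 -> 1 -> 2 has no cycle; adding the edge 2 -> 0 creates one.
print(first_homology_trivial(3, [(0, 1), (1, 2)]))          # True
print(first_homology_trivial(3, [(0, 1), (1, 2), (2, 0)]))  # False
```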
[12] A Homology Theory for Hybrid Systems: Hybrid Homology. A. D. Ames, S. Sastry, In Proc. Hybrid Systems: Computation and Control, 8th International Workshop, Zurich, Switzerland, March 9-11, M. Morari and L. Thiele, eds., vol. 3414 of Lecture Notes in Computer Science, Springer-Verlag, pp. 86-102, 2005.
Abstract: By transferring the theory of hybrid systems to a categorical framework, it is possible to develop a homology theory for hybrid systems: hybrid homology. This is achieved by considering the underlying "space" of a hybrid system, its hybrid space or H-space. The homotopy colimit can be applied to this H-space to obtain a single topological space; the hybrid homology of an H-space is the homology of this space. The result is a spectral sequence converging to the hybrid homology of an H-space, providing a concrete way to compute this homology. Moreover, the hybrid homology of the H-space underlying a hybrid system gives useful information about the behavior of this system: the vanishing of the first hybrid homology of this H-space, when it is contractible and finite, implies that this hybrid system is not Zeno.

[13] Sufficient Conditions for the Existence of Zeno Behavior. A. D. Ames, S. Sastry, 44th IEEE Conference on Decision and Control and European Control Conference ECC 2005 (CDC-ECC'05), (submitted for publication), Seville, Spain, Dec. 12-15, 2005.
Abstract: In this paper, sufficient conditions for the existence of Zeno behavior in a class of hybrid systems are given; these are the first sufficient conditions on Zeno of which the authors are aware for hybrid systems with nontrivial dynamics. This is achieved by considering a class of hybrid systems termed diagonal first quadrant (DFQ) hybrid systems. When the underlying graph of a DFQ hybrid system has a cycle, we can construct an infinite execution for this system when the vector fields on each domain satisfy certain assumptions. To this execution, we can associate a single discrete-time dynamical system that describes its continuous evolution. Therefore, we reduce the study of executions of DFQ hybrid systems to the study of a single discrete-time dynamical system. We obtain sufficient conditions for the existence of Zeno by determining when this discrete-time dynamical system is exponentially stable.
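The final test in [13] is a standard one: when the associated discrete-time system is linear, x_{k+1} = A x_k, exponential stability reduces to a spectral-radius check. The matrix below is an arbitrary example, not one derived from a DFQ system:

```python
import numpy as np

def is_exponentially_stable(A):
    """A discrete-time linear system x_{k+1} = A x_k is exponentially
    stable iff the spectral radius of A (the largest eigenvalue
    magnitude) is strictly below 1."""
    return bool(max(abs(np.linalg.eigvals(A))) < 1.0)

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])                 # eigenvalues 0.5 and 0.8
print(is_exponentially_stable(A))          # True: orbits contract geometrically
print(is_exponentially_stable(np.eye(2)))  # False: no contraction
```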
[14] A Decentralized Approach to Sound Source Localization with Sensor Networks. I. Amundson, P. Schmidt, K. D. Frampton, Presented at the 2004 ASME International Mechanical Engineering Conference and Exposition, Anaheim, CA, November 2004.
Abstract: A sound source localization system has been developed based on a decentralized sensor network. Decentralization permits all nodes in a network to handle their own processing and decision-making and, as a result, reduces network congestion and the need for a centralized processor. The system consists of an array of battery-operated COTS Ethernet-ready embedded systems with an attached microphone circuit. The localization solution requires groups of at least four nodes to be active within the array to return an acceptable two-dimensional result. Sensor nodes, positioned randomly over a ten-square-meter area, recorded detection times of impulsive sources with microsecond resolution. In order to achieve a scalable system, nodes were organized in groups of 4 to 10 nodes. Grouping was determined by selecting the nodes farthest apart from each other. A designated leader of each group analyzed the sound source arrival times and calculated the sound source location based on time-differences of arrival. Experimental results show that this approach to sound source localization can achieve accuracies of about 30 cm. Perhaps more importantly, it is accomplished in a decentralized manner, which can lead to a more flexible, scalable distributed sensor network.

[15] Web Service Interfaces. D. Beyer, A. Chakrabarti, T. A. Henzinger, Proc. 14th International World Wide Web Conference (WWW 2005), Chiba, Japan, May 10-14, 2005.
Abstract: We present a language for specifying web service interfaces. A web service interface puts three kinds of constraints on the users of the service. First, the interface specifies the methods that can be called by a client, together with types of input and output parameters; these are called signature constraints. Second, the interface may specify propositional constraints on method calls and output values that may occur in a web service conversation; these are called consistency constraints. Third, the interface may specify temporal constraints on the ordering of method calls; these are called protocol constraints. The interfaces can be used to check, first, if two or more web services are compatible, and second, if a web service A can be safely substituted for a web service B. The algorithm for compatibility checking verifies that two or more interfaces fulfill each other's constraints. The algorithm for substitutivity checking verifies that service A demands fewer and fulfills more constraints than service B.

[16] Online Model-Based Diagnosis to Support Autonomous Operation of an Advanced Life Support System. G. Biswas, E. J. Manders, J. W. Ramirez, N. Mahadevan, S. Abdelwahed, Habitation: International Journal of Human Support Research, vol. 10, no. 1, pp. 21-38, 2004.
Abstract: This article describes methods for online model-based diagnosis of subsystems of the advanced life support system (ALS). The diagnosis methodology is tailored to detect, isolate, and identify faults in components of the system quickly, so that fault-adaptive control techniques can be applied to maintain system operation without interruption. We describe the components of our hybrid modeling scheme and the diagnosis methodology, and then demonstrate the effectiveness of this methodology by building a detailed model of the reverse osmosis (RO) system of the water recovery system (WRS) of the ALS. This model is validated with real data collected from an experimental testbed at NASA JSC. A number of diagnosis experiments run on simulated faulty data are presented and the results are discussed.
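The time-difference-of-arrival computation performed by each group leader in [14] can be sketched as follows; the node placement, source position, and brute-force grid-search solver are all illustrative stand-ins for what the deployed system actually runs.

```python
import math

def tdoa_locate(nodes, arrival_times, c=343.0, grid=201, size=10.0):
    """Scan a size x size area and keep the point whose predicted range
    differences (relative to node 0) best match the range differences
    implied by the measured arrival times. A brute-force stand-in for a
    proper least-squares TDOA solver."""
    meas = [c * (t - arrival_times[0]) for t in arrival_times]
    best, best_err = None, float("inf")
    for i in range(grid):
        for j in range(grid):
            x, y = size * i / (grid - 1), size * j / (grid - 1)
            r = [math.hypot(x - nx, y - ny) for nx, ny in nodes]
            err = sum((r[k] - r[0] - meas[k]) ** 2
                      for k in range(1, len(nodes)))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Four hypothetical nodes at the corners of a 10 m x 10 m area, an
# impulsive source at (3, 7), exact arrival times at 343 m/s.
nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
source = (3.0, 7.0)
times = [math.hypot(source[0] - nx, source[1] - ny) / 343.0
         for nx, ny in nodes]
estimate = tdoa_locate(nodes, times)  # recovers a point near (3, 7)
```

With at least four nodes the range differences generically pin down a unique two-dimensional position, which matches the paper's minimum group size.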
[17] HyVisual: A Hybrid System Visual Modeler. C. Brooks, A. Cataldo, E. A. Lee, J. Liu, X. Liu, S. Neuendorffer, H. Zheng, Technical Memorandum UCB/ERL M04/18, University of California, Berkeley, June 28, 2004.
Abstract: The Hybrid System Visual Modeler (HyVisual) is a block-diagram editor and simulator for continuous-time dynamical systems and hybrid systems. Hybrid systems mix continuous-time dynamics, discrete events, and discrete mode changes. This visual modeler supports construction of hierarchical hybrid systems. It uses a block-diagram representation of ordinary differential equations (ODEs) to define continuous dynamics, and allows mixing of continuous-time signals with events that are discrete in time. It uses a bubble-and-arc diagram representation of finite state machines to define discrete behavior driven by mode transitions. In this document, we describe how to graphically construct models and how to interpret the resulting models. HyVisual provides a sophisticated numerical solver that simulates the continuous-time dynamics, and effective use of the system requires at least a rudimentary understanding of the properties of the solver. This document provides a tutorial that will enable the reader to construct elaborate models and to have confidence in the results of a simulation of those models. We begin by explaining how to describe continuous-time models of classical dynamical systems, and then progress to the construction of mixed-signal and hybrid systems. The intended audience for this document is an engineer with at least a rudimentary understanding of the theory of continuous-time dynamical systems (ordinary differential equations and Laplace transform representations), who wishes to build models of such systems, and who wishes to learn about hybrid systems and build models of hybrid systems. HyVisual is built on top of Ptolemy II, a framework supporting the construction of such domain-specific tools. See Ptolemy II for more information.

[18] Heterogeneous Concurrent Modeling and Design in Java (Volume 1: Introduction to Ptolemy II). C. Brooks, E. A. Lee, X. Liu, S. Neuendorffer, Y. Zhao, H. Zheng (eds.), Technical Memorandum UCB/ERL M04/27, University of California, Berkeley, July 29, 2004.
Abstract: This volume describes how to construct Ptolemy II models for web-based modeling or building applications. The first chapter includes an overview of Ptolemy II software and a brief description of each of the models of computation that have been implemented. It describes the package structure of the software, and includes as an appendix a brief tutorial on UML notation, which is used throughout the documentation to explain the structure of the software. The second chapter is a tutorial on building models using Vergil, a graphical user interface where models are built pictorially.
The third chapter discusses the Ptolemy II expression language, which is used to set parameter values. The next chapter gives an overview of actor libraries. These three chapters, plus one of the domain chapters, will be sufficient for users to start building interesting models in the selected domain. The fifth chapter gives a tutorial on designing actors in Java. The sixth chapter explains MoML, the XML schema used by Vergil to store models. And the seventh chapter, the final one in this part, explains how to construct custom applets. Volume 2 describes the software architecture of Ptolemy II, and volume 3 describes the domains, each of which implements a model of computation. [19] Heterogeneous Concurrent Modeling and Design in Java (Volume 2: Ptolemy II Software Architecture) C. Brooks, E. A. Lee, X. Liu, S. Neuendorffer, Y. Zhao, H. Zheng (eds.), Technical Memorandum UCB/ERL M04/16, University of California, Berkeley, June 24, 2004. Abstract: This volume describes the software architecture of Ptolemy II.
The first chapter covers the kernel package, which provides a set of Java classes supporting clustered graph topologies for models. Cluster graphs provide a very general abstract syntax for component-based modeling, without assuming or imposing any semantics on the models. The actor package begins to add semantics by providing basic infrastructure for data transport between components. The data package provides classes to encapsulate the data that is transported. It also provides an extensible type system and an interpreted expression language. The graph package provides graph-theoretic algorithms that are used in the type system and by schedulers in the individual domains. The plot package provides a visual data plotting utility that is used in many of the applets and applications. Vergil is the graphical front end to Ptolemy II, and Vergil itself uses Ptolemy II to describe its own configuration.
Volume 1 gives an introduction to Ptolemy II, including tutorials on the use of the software, and volume 3 describes the domains, each of which implements a model of computation. [20] Heterogeneous Concurrent Modeling and Design in Java (Volume 3: Ptolemy II Domains) C. Brooks, E. A. Lee, X. Liu, S. Neuendorffer, Y. Zhao, H. Zheng (eds.), Technical Memorandum UCB/ERL M04/17, University of California, Berkeley, June 24, 2004. Abstract: This volume describes Ptolemy II domains. The domains implement models of computation, which are summarized in chapter 1. Most of these models of computation can be viewed as a framework for component-based design, where the framework defines the interaction mechanism between the components. Some of the domains (CSP, DDE, and PN) are thread-oriented, meaning that the components implement Java threads. These can be viewed, therefore, as abstractions upon which to build threaded Java programs. These abstractions are much easier to use (much higher level) than the raw threads and monitors of Java. Others of the domains (CT, DE, SDF) implement their own scheduling between actors, rather than relying on threads.
This usually results in much more efficient execution. The Giotto domain, which addresses real-time computation, is not threaded, but has concurrency features similar to threaded domains. The FSM domain is in a category by itself, since in it, the components are not producers and consumers of data, but rather are states. The non-threaded domains are described first, followed by FSM and Giotto, followed by the threaded domains. Within this grouping, the domains are ordered alphabetically (which is an arbitrary choice). Volume 1 is an introduction to Ptolemy II, including tutorials on use of the software, and volume 2 describes the Ptolemy II software architecture. [21] Discrete-Event Systems: Generalizing Metric Spaces and Fixed Point Semantics A. Cataldo, E. A. Lee, X. Liu, E. Matsikoudis, H. Zheng, 16th International Conference on Concurrency Theory (CONCUR 2005), (submitted for publication), San Francisco, CA, August 2005. Abstract: This paper studies the semantics of discrete-event systems as a concurrent model of computation. The classical approach, which is based on metric spaces, does not handle well multiplicities of simultaneous events, yet such simultaneity is a common property of discrete-event models and modeling languages. (Consider, for example, delta time in VHDL.) In this paper, we develop a semantics using an extended notion of time. We give a generalization of metric spaces that we call tetric spaces. (A tetric functions like a metric, but its value is an element of a totally ordered monoid rather than an element of the non-negative reals.) A straightforward generalization of the Banach fixed-point theorem to tetric spaces supports the definition of a fixed-point semantics and generalizations of well-known sufficient conditions for avoidance of Zeno conditions.
[22] Verifying Quantitative Properties Using Bound Functions A. Chakrabarti, K. Chatterjee, T. A. Henzinger, O. Kupferman, and R. Majumdar, In Proc. 13th Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME 2005), Saarbrücken, Germany, October 3–6, 2005. Abstract: In the boolean framework of model-based specification and verification, systems are graphs where each state is labeled with boolean propositions, and properties are languages where each trace has a boolean value (i.e., a trace either satisfies a property or it does not). We define and study a quantitative generalization of this traditional setting, where propositions have integer values at states, and properties have integer values on traces. For example, the value of a quantitative proposition at a state may represent power consumed at the state, and the value of a quantitative property on a trace may represent energy used along the trace. The value of a quantitative property at a state, then, is the maximum (or minimum) value achievable over all possible traces from the state.
In this quantitative framework, model checking can be used to compute, for example, the minimum battery capacity necessary for achieving a given objective, or the maximal achievable lifetime of a system with a given initial battery capacity. In the case of open systems, these problems require the solution of games with integer values. Quantitative model-checking or game-solving is undecidable, except if bounds on the computation can be found. Indeed, many interesting quantitative properties, like minimal necessary battery capacity and maximal achievable lifetime, can be naturally specified by a quantitative-bound automaton, which consists of (1) a finite automaton with integer registers, and (2) a bound function f that maps each system K to an integer f(K).
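The quantitative view sketched in this abstract (integer-valued propositions at states, integer-valued properties on traces) can be made concrete with a toy valuation. The specific property below, peak cumulative energy drawn along a trace, and all names are illustrative assumptions, not the paper's construction.

```python
# Toy illustration of quantitative trace valuation: each step of a trace
# carries an integer "power" proposition (positive draws energy, negative
# recharges); a trace is valued by its peak cumulative draw, and a state
# is valued by the worst case over its traces. Illustrative sketch only.

def trace_value(power_per_step):
    """Peak cumulative energy drawn along one trace."""
    total, peak = 0, 0
    for p in power_per_step:
        total += p            # p > 0 draws energy, p < 0 recharges
        peak = max(peak, total)
    return peak

def state_value(traces):
    """Minimum battery capacity sufficient for every listed trace."""
    return max(trace_value(t) for t in traces)

traces = [[3, 2, -1, 4], [1, 1, 1], [5, -2, 5]]
print(state_value(traces))  # -> 8
```

Over a finite trace set this is a plain fold; the paper's contribution is doing the analogous computation over all traces of a system via model checking, which is where the bound function becomes necessary.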
While a traditional automaton accepts or rejects a given trace, a quantitative automaton maps each trace to an integer. The bound function f(K) defines an upper bound on register values which depends on the system K. We show that every quantitative-bound automaton defines a dynamic program that provides model-checking and game-solving algorithms. Along with the linear-time, automaton-based view of quantitative verification, we present a corresponding branching-time view based on a quantitative-bound mu-calculus. We study the relationship, expressive power, and complexity of both views. [23] Two-player Nonzero-sum ω-regular Games K. Chatterjee, In Proc. 16th International Conference on Concurrency Theory, (submitted for publication), San Francisco, CA, August 23–26, 2005. Abstract: We study infinite stochastic games played by two players on a finite graph with goals specified by sets of infinite traces. The games are concurrent (each player simultaneously and independently chooses an action at each round), stochastic (the next state is determined by a probability distribution depending on the current state and the chosen actions), infinite (the game continues for an infinite number of rounds), nonzero-sum (the players' goals are not necessarily conflicting), and undiscounted. We show that if each player has an ω-regular objective expressed as a parity objective, then there exists an ε-Nash equilibrium, for every ε > 0. However, exact Nash equilibria need not exist. We study the complexity of finding the values (payoff profile) of an ε-Nash equilibrium. We show that the values of an ε-Nash equilibrium in nonzero-sum concurrent parity games can be computed by solving the following two simpler problems: computing the values of zero-sum (the goals of the players are strictly conflicting) concurrent parity games, and computing ε-Nash equilibrium values of nonzero-sum concurrent games with reachability objectives.
As a consequence, we establish that the values of an ε-Nash equilibrium can be approximated in FNP (functional NP), and hence in EXPTIME. [24] The Complexity of Stochastic Rabin and Streett Games K. Chatterjee, L. de Alfaro, T. A. Henzinger, 32nd International Colloquium on Automata, Languages and Programming (ICALP 05), (submitted for publication), Lisboa, Portugal, July 11–15, 2005. Abstract: The theory of graph games with ω-regular winning conditions is the foundation for modeling and synthesizing reactive processes.
In the case of stochastic reactive processes, the corresponding stochastic graph games have three players, two of them (System and Environment) behaving adversarially, and the third (Uncertainty) behaving probabilistically. We consider two problems for stochastic graph games: the qualitative problem asks for the set of states from which a player can win with probability 1 (almost-sure winning); the quantitative problem asks for the maximal probability of winning (optimal winning) from each state. We show that for Rabin winning conditions, both problems are in NP. As these problems were known to be NP-hard, it follows that they are NP-complete for Rabin conditions, and dually, coNP-complete for Streett conditions. The proof proceeds by showing that pure memoryless strategies suffice for qualitatively and quantitatively winning stochastic graph games with Rabin conditions. This insight is of interest in its own right, as it implies that controllers for Rabin objectives have simple implementations. We also prove that for every ω-regular condition, optimal winning strategies are no more complex than almost-sure winning strategies. [25] Trading Memory for Randomness K. Chatterjee, L. de Alfaro, T. A. Henzinger, In Proc. 1st International Conference on Quantitative Evaluation of Systems (QEST 04), University of Twente, Enschede, The Netherlands, September 27–30, 2004. Abstract: Strategies in repeated games can be classified as to whether or not they use memory and/or randomization, yielding the deterministic and probabilistic varieties. We characterize when memory and/or randomization are required for winning with respect to various classes of ω-regular objectives, noting particularly when the use of memory can be traded for the use of randomization. In particular, we show that Markov decision processes allow randomized memoryless optimal strategies for all Müller objectives.
Furthermore, we show that 2-player probabilistic graph games allow randomized memoryless strategies for winning with probability 1 those Müller objectives which are upward-closed. Upward-closure means that if a set α of infinitely repeating vertices is winning, then all supersets of α are also winning. [26] Mean-Payoff Parity Games K. Chatterjee, T. A. Henzinger, M. Jurdzinski, 20th Annual Symposium on Logic in Computer Science (LICS 05), (submitted for publication), Chicago, IL, June 26–29, 2005. Abstract: Games played on graphs may have qualitative objectives, such as the satisfaction of an ω-regular property, or quantitative objectives, such as the optimization of a real-valued reward. When games are used to model reactive systems with both fairness assumptions and quantitative (e.g., resource) constraints, then the corresponding objective combines both a qualitative and a quantitative component. In a general case of interest, the qualitative component is a parity condition and the quantitative component is a mean-payoff reward. We study and solve such mean-payoff parity games.
We also prove some interesting facts about mean-payoff parity games which distinguish them both from mean-payoff and from parity games. In particular, we show that optimal strategies exist in mean-payoff parity games, but they may require infinite memory. [27] Counter-example Guided Planning K. Chatterjee, T. A. Henzinger, R. Jhala, R. Majumdar, 21st International Conference on Uncertainty in Artificial Intelligence (UAI 05), (submitted for publication), University of Edinburgh, Edinburgh, Scotland, July 26–29, 2005. Abstract: Planning in adversarial and uncertain environments can be modeled as the problem of devising strategies in stochastic perfect information games. These games are generalizations of Markov decision processes (MDPs): there are two (adversarial) players, and a source of randomness. The main practical obstacle to computing winning strategies in such games is the size of the state space. In practice, therefore, one typically works with abstractions of the model.
The difficulty, of course, is to come up with an abstraction that is neither too coarse to remove all winning strategies (plans), nor too fine to be intractable. In verification, the paradigm of counterexample-guided abstraction refinement has been successful in constructing useful but parsimonious abstractions automatically. We extend this paradigm, for the first time, to probabilistic models (namely, 2½-player games and, as a special case, MDPs). This allows us to apply the counterexample-guided abstraction paradigm to the AI planning problem. As special cases, we get planning algorithms for MDPs and deterministic systems that automatically construct system abstractions. [28] galsC: A Language for Event-Driven Embedded Systems E. Cheong, J.
Liu, Presented at Design, Automation and Test in Europe (DATE), Munich, Germany, March 7–11, 2005. Abstract: We introduce galsC, a language designed for programming event-driven embedded systems such as sensor networks. galsC implements the TinyGALS programming model. At the local level, software components are linked via synchronous method calls to form actors. At the global level, actors communicate with each other asynchronously via message passing, which separates the flow of control between actors. A complementary model called TinyGUYS is a guarded yet synchronous model designed to allow thread-safe sharing of global state between actors via parameters without explicitly passing messages. The galsC compiler extends the nesC compiler, which allows for better type checking and code generation. Having a well-structured concurrency model at the application level greatly reduces the risk of concurrency errors, such as deadlock and race conditions. The galsC language is implemented on the Berkeley motes and is compatible with the TinyOS/nesC component library. We use a multi-hop wireless sensor network as an example to illustrate the effectiveness of the language.
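The two-level structure this abstract describes (synchronous calls inside an actor, asynchronous message queues between actors) can be sketched in a few lines. This is a hypothetical toy in Python, not the galsC runtime; all class and message names are illustrative.

```python
# Toy GALS (globally asynchronous, locally synchronous) sketch: handlers
# inside an actor run synchronously to completion, while actors exchange
# messages only through queues. Illustrative only, not the galsC runtime.
from collections import deque

class Actor:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()      # asynchronous inter-actor channel

    def handle(self, msg):        # synchronous, runs to completion
        return f"{self.name} handled {msg}"

def run(actors):
    """Drain every actor's inbox; handler executions never interleave."""
    log = []
    pending = True
    while pending:
        pending = False
        for a in actors:
            if a.inbox:
                log.append(a.handle(a.inbox.popleft()))
                pending = True
    return log

sender, receiver = Actor("sender"), Actor("receiver")
sender.inbox.append("start")
receiver.inbox.append("ping")
print(run([sender, receiver]))
```

Because each handler runs atomically with respect to the others, data races inside an actor are ruled out by construction, which is the property the abstract credits with reducing concurrency errors.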
[29] Toward a Semantic Anchoring Infrastructure for Domain Specific Modeling Languages K. Chen, J. Sztipanovits, S. Neema, M. Emerson, S. Abdelwahed, Embedded Systems Software Conference (EMSOFT 2005), (submitted for publication), Jersey City, NJ, September 18–22, 2005. Abstract: Metamodeling facilitates the rapid, inexpensive development of domain-specific modeling languages (DSMLs). However, there are still challenges hindering the wide-scale industrial application of model-based design. One of these unsolved problems is the lack of a practical, effective method for the formal specification of DSML semantics. This problem has a negative impact on the reusability of DSMLs and analysis tools in domain-specific tool chains. To address these issues, we propose a formal, well-founded methodology with supporting tools to anchor the semantics of DSMLs to precisely defined and validated “semantic units”. In our methodology, each of the syntactic and semantic DSML components is defined precisely and completely. The main contribution of our approach is that it moves toward an infrastructure for DSML design that integrates formal methods with practical engineering tools. In this paper we use a mathematical model, Abstract State Machines, as a common semantic framework to define the semantic domains of DSMLs.
[30] New Congestion Control Schemes Over Wireless Networks: Stability Analysis M. Chen, A. Abate, S. Sastry, In Proc. International Federation of Automatic Control World Congress, (in publication), Prague, July 2005. Abstract: This paper proposes two new congestion control schemes for packet-switched wireless networks. Starting from the seminal work of Kelly (Kelly et al., Dec 1999), we consider the decentralized flow control model for a TCP-like scheme and extend it to the wireless scenario. Motivated by the presence of channel errors, we introduce updates in the part of the model representing the number of connections the user establishes with the network; this assumption has an important physical interpretation. Specifically, we propose two updates: the first is static, while the second evolves dynamically. The global stability of both schemes is proved; in addition, a stochastic stability study and the rate of convergence of the two algorithms are investigated. This paper focuses on the delay sensitivity of both schemes. A stability condition on the parameters of the system is introduced and proved.
Moreover, some deeper insight into the structure of the oscillations of the system is attained. To support the theoretical results, simulations are provided. [31] Stability and Delay Considerations for Flow Control Over Wireless Networks M. Chen, A. Abate, A. Zakhor, S. Sastry, UCB ERL Tech Report No. M05/14, Berkeley, CA, 2005. Abstract: In this paper we develop a general framework for the problem of flow control over wireless networks, evaluate the existing approaches within that framework, and propose new ones. Significant progress has been made on the mathematical modeling of flow control for the wired Internet, among which Kelly’s contribution is widely accepted as a standard framework. We extend Kelly’s flow control framework to the wireless scenario, where the wireless link is assumed to have a fixed link capacity and a packet loss rate caused by the physical channel errors.
In this framework, the problem of flow control over wireless can be formulated as a convex optimization problem with noisy feedback. We then propose two new solutions to the problem, achieving optimal performance by only modifying the application layer. The global stability and the delay sensitivity of the schemes are investigated and verified by numerical results. Our work advocates the use of multiple connections for flow, or congestion control, over wireless. [32] Simulation Based Deadlock Analysis for System Level Designs X. Chen, A. Davare, H. Hsieh, A. Sangiovanni-Vincentelli, Y. Watanabe, ACM/IEEE Design Automation Conference, (submitted for publication), Anaheim, CA, June 2005. Abstract: In the design of highly complex, heterogeneous, and concurrent systems, deadlock detection and resolution remains an important issue. In this paper, we systematically analyze the synchronization dependencies in concurrent systems modeled in the Metropolis design environment, where system functions, high-level architectures, and function-architecture mappings can be modeled and simulated. We propose a data structure called the dynamic synchronization dependency graph, which captures the runtime (blocking) dependencies.
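To illustrate what such a dependency graph enables: a cycle in the "blocked waiting on" relation signals a potential deadlock, and a depth-first search finds one. The sketch below is a generic illustration with invented process names, not the Metropolis implementation.

```python
# Illustrative loop detection on a blocking-dependency graph: an edge
# a -> b means "a is blocked waiting on b"; any cycle is a potential
# deadlock. Classic three-color DFS; generic sketch, not Metropolis code.

def find_deadlock_cycle(deps):
    """Return one cycle in the dependency dict {node: [nodes]}, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    stack = []                      # current DFS path

    def dfs(n):
        color[n] = GRAY
        stack.append(n)
        for m in deps.get(n, []):
            c = color.get(m, WHITE)
            if c == GRAY:           # back edge: m is on the current path
                return stack[stack.index(m):] + [m]
            if c == WHITE:
                cycle = dfs(m)
                if cycle:
                    return cycle
        stack.pop()
        color[n] = BLACK            # fully explored, not in any new cycle
        return None

    for n in deps:
        if color.get(n, WHITE) == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

deps = {"writer": ["bus"], "bus": ["arbiter"], "arbiter": ["writer"],
        "reader": []}
print(find_deadlock_cycle(deps))  # -> ['writer', 'bus', 'arbiter', 'writer']
```

Reporting the cycle itself, rather than a bare yes/no, is what lets a designer trace the blocking chain back to the modeling error, which is the use case the abstract emphasizes.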
A loop-detection algorithm is then used to detect deadlocks and to help designers quickly isolate and identify the modeling errors that cause them. We demonstrate our approach through a real-world design example: a complex functional model for video processing together with a high-level model of function-architecture mapping. [33] The Best of Both Worlds: The Efficient Asynchronous Implementation of Synchronous Specifications A. Davare, K. Lwin, A. Kondratyev, A. Sangiovanni-Vincentelli, Presented at ACM/IEEE Design Automation Conference, San Diego, CA, June 7-11, 2004. Abstract: The desynchronization approach combines a traditional synchronous specification style with a robust asynchronous implementation model. The main contribution of this paper is the description of two optimizations that decrease the overhead of desynchronization. First, we investigate the use of clustering to vary the granularity of desynchronization. Second, by applying temporal analysis to a formal execution model of the desynchronized design, we uncover significant amounts of timing slack.
These methods are successfully applied to industrial RTL designs. [34] Overview of the Ptolemy Project J. Davis, C. Hylands, J. Janneck, E. A. Lee, J. Liu, X. Liu, S. Neuendorffer, S. Sachs, M. Stewart, K. Vissers, P. Whitaker, Y. Xiong, Technical Memorandum UCB/ERL M01/11, EECS, University of California, Berkeley, March 6, 2001. [35] Implementing and Testing a Nonlinear Model Predictive Tracking Controller for Aerial Pursuit Evasion Games on a Fixed Wing Aircraft J. M. Eklund, J. Sprinkle, S. S. Sastry, American Control Conference (ACC) 2005, (in publication), Portland, OR, June 8-10, 2005. Abstract: The capability of Unmanned Aerial Vehicles (UAVs) to perform autonomously has not yet been demonstrated; however, demonstrating it is an important step toward enabling at least limited autonomy in such aircraft, allowing them to operate with a temporary loss of remote control, or when confronted with an adversary or obstacles for which remote control is insufficient. Such capabilities have been under development through the Software Enabled Control (SEC) program and were recently tested in the Capstone Demonstration of that program.
In this paper, the final simulation and flight test results are presented for a Nonlinear Model Predictive Controller (NMPC) used in evasive maneuvers in three dimensions on a fixed-wing UAV for the purposes of pursuit/evasion games with a piloted F-15 aircraft. [36] Template Based Planning and Distributed Control for Networks of Unmanned Underwater Vehicles J. M. Eklund, J. Sprinkle, S. S. Sastry, 44th IEEE Conference on Decision and Control and European Control Conference ECC 2005 (CDC-ECC'05), (submitted for publication), Seville, Spain, Dec. 12-15, 2005. Abstract: A decentralized control scheme for large packs of unmanned underwater vehicles (UUVs) is proposed and investigated. This scheme is based on shared knowledge of a template, which includes operational plans, modes of operation, contingencies (including the ability to adapt individual plans within the template to changing operational conditions), and the protocols for disseminating individual state, network, and command information between UUVs.
This template-based control enables complex and cooperative functionality by the network within the bounds of severe communications limitations, and it provides a highly scalable solution for distributed control. Simulation results for medium-sized packs of UUVs are presented, and the road ahead to physical implementation, experimentation, and deployment is described. [37] A MOF-Based Metamodeling Environment M. Emerson, J. Sztipanovits, T. Bapty, Journal of Universal Computer Science, vol. 10, no. 10, pp. 1357-1382, October 2004. Abstract: The Meta Object Facility (MOF) forms one of the core standards of the Object Management Group's Model Driven Architecture. It has several use cases, including as a repository service for storing abstract models used in distributed object-oriented software development, as a development environment for generating CORBA IDL, and as a metamodeling language for the rapid specification, construction, and management of domain-specific, technology-neutral modeling languages. This paper focuses on the use of MOF as a metamodeling language and describes our latest work on changing the MIC metamodeling environment from UML/OCL to MOF.
We have implemented a functional graphical metamodeling environment based on the MOF v1.4 standard using GME and GReAT. This implementation serves as a testament to the power of formally well-defined metamodeling and metamodel-based model transformation approaches. Furthermore, our work gave us an opportunity to evaluate several important features of MOF v1.4 as a metamodeling language: (a) the completeness of MOF v1.4 for defining the abstract syntax of complex (multiple-aspect) DSMLs, (b) the Package concept for composing and reusing metamodels, and (c) facilities for modeling the mapping between the abstract and concrete syntax of DSMLs. [38] Implementing a MOF-Based Metamodeling Environment Using Graph Transformations M. Emerson, J. Sztipanovits, In Proc. 4th Workshop on Domain-Specific Modeling, pp. 83-92, 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), Vancouver, Canada, October 2004.
Abstract: Versatile model-based design demands languages and tools which are suitable for the creation, manipulation, transformation, and composition of domain-specific modeling languages and domain models. The Meta Object Facility (MOF) forms the cornerstone of the OMG’s Model Driven Architecture (MDA) as the standard metamodeling language for the specification of domain-specific languages. We have implemented MOF v1.4 as an alternative metamodeling language for the Generic Modeling Environment (GME), the flagship tool of Model Integrated Computing (MIC). Our implementation utilizes model-to-model transformations specified with the Graph Rewriting and Transformation toolsuite (GReAT) to translate between MOF and the UML-based GME metamodeling language. The technique described by this paper illustrates the role graph transformations can play in interfacing MIC technology to new and evolving modeling standards. [39] Acoustic Self-Localization in a Distributed Sensor Network K. D. Frampton, IEEE Sensors Journal, (in publication), 2005.
Abstract: The purpose of this work is to present a technique for determining the locations of nodes in a distributed sensor network. The technique is based on the Time Difference of Arrival (TDOA) of acoustic signals. In this scheme, several sound sources at known locations transmit while each node in the sensor network records the wavefront time of arrival. Data from the nodes are transmitted to a central processor, and the nonlinear TDOA equations are solved. Computational simulation results are presented to quantify the solution behavior and its sensitivity to likely error sources. Experimental self-localization results are also presented to demonstrate the potential of this approach for solving the challenging self-localization problem. [40] Distributed Group-Based Vibration Control with a Networked Embedded System K. D. Frampton, Journal of Intelligent Materials Systems and Structures, vol. 14, pp. 307-314, 2005. Abstract: The purpose of this work is to demonstrate the performance of a distributed vibration control system based on a networked embedded system. The platform from which control is effected consists of a network of computational elements called nodes.
Each node possesses its own computational capability, sensor, actuator, and the ability to communicate with other nodes via a wired or wireless network. The primary focus of this work is to demonstrate the use of existing group management middleware concepts to enable vibration control with such a distributed network. Group management middleware is distributed software that provides for the establishment and maintenance of groups of distributed nodes and for the network communication within such groups. The reason for developing distributed control based on group concepts is that communication of real-time sensor and actuator data among all system nodes would not be possible due to bandwidth constraints; group management middleware provides for inter-node communication among subsets of nodes in an efficient and scalable manner. The objective of demonstrating the effectiveness of such grouping for distributed control is met by designing distributed feedback compensators that take advantage of node groups in order to effect their control. Two types of node groups are considered: groups based on physical proximity and groups based on modal sensitivity. The global control objective is to minimize the vibrational response of a rectangular plate in specific modes while minimizing spillover to out-of-bandwidth modes. Results of this investigation demonstrate that such a distributed control system can achieve vibration attenuations comparable to those of a centralized controller. The importance of efficient use of network communications bandwidth is also discussed with regard to the control architectures considered.
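The proximity-based grouping idea in the abstract above can be illustrated with a minimal sketch. This is purely illustrative: the node coordinates, the distance threshold, and the union-find grouping scheme below are hypothetical choices for the sake of the example, not taken from the paper's middleware.

```python
# Illustrative sketch only: group sensor/actuator nodes by physical
# proximity, as one might before assigning distributed compensators.
# Coordinates, radius, and the union-find approach are assumptions.
from math import dist

def proximity_groups(positions, radius):
    """Group node indices whose pairwise distance is within `radius`,
    using a simple union-find over all node pairs."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Link every pair of nodes closer than the threshold.
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if dist(positions[i], positions[j]) <= radius:
                union(i, j)

    # Collect nodes by their union-find root.
    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Four nodes on a plate: two close pairs yield two proximity groups.
nodes = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 1.0)]
print(proximity_groups(nodes, radius=0.5))  # → [[0, 1], [2, 3]]
```

Each resulting group could then exchange sensor data only within itself, which is the bandwidth-saving point the abstract makes.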
[41] Vibroacoustic Control with a Distributed Sensor Network K. D. Frampton, Presented at Applications of Graph Transformations with Industrial Relevance (AGTIVE04), Williamsburg, VA, September 2004. Abstract: The purpose of this work is to demonstrate the ability of a distributed control system, based on a networked embedded system, to reduce acoustic radiation from a vibrating structure. The platform from which control is effected consists of a network of computational elements called nodes. Each node possesses its own computational capability, sensor, actuator, and the ability to communicate with other nodes via a wired or wireless network. The primary focus of this work is to employ existing group management middleware concepts to enable vibration control with such a distributed network. Group management middleware is distributed software that provides for the establishment and maintenance of groups of distributed nodes and for the network communication among such groups. This objective is met by designing distributed feedback compensators that take advantage of node groups in order to effect their control. Two types of node groups are considered: groups based on physical proximity and groups based on modal sensitivity. The global control objective is to minimize the vibrational response of a rectangular plate in specific modes while minimizing spillover to out-of-bandwidth modes. Results of this investigation demonstrate that such a distributed control system can achieve vibration attenuations comparable to those of a centralized controller. The importance of efficient use of network communications bandwidth is also discussed with regard to the control architectures considered.
[42] Using Dependent Types to Certify the Safety of Assembly Code M. Harren, G. C. Necula, 12th International Static Analysis Symposium (SAS '05), (in publication), London, UK, September 7-9, 2005. Abstract: There are many source-level analyses or instrumentation tools that enforce various safety properties. In this paper we present an infrastructure that can be used to check independently that the assembly output of such tools has the desired safety properties.
By working at the assembly level we avoid the complications of unavailable source code and of source-level parsing, and we certify the code that is actually deployed. The novel feature of the framework is an extensible dependently typed system that supports type inference and mutation of dependent values in memory. The type system can be extended with new types as needed for the source-level tool being certified. Using these dependent types, we are able to express the invariants enforced by CCured, a source-level instrumentation tool that guarantees type safety in legacy C programs. We can therefore check that the x86 assembly code resulting from compilation with CCured is in fact type-safe. [43] Distributed Control to Improve Performance of Thermoelectric Coolers R. D. Harvey, D. G. Walker, K. D. Frampton, Presented at 2004 ASME International Mechanical Engineering Congress and Exposition, Anaheim, CA, November 2004.
Abstract: Thermoelectric coolers (TECs) have become more popular in chip-cooling applications. However, due to material properties, the scope of TEC applicability is limited. The primary reason for this limitation stems from the poor efficiency of the TEC: this low efficiency causes increased heat production, resulting in a very narrow band in which the TEC is effective. Since TECs are cooling units composed of numerous individual cooling elements, this band can be expanded by implementing distributed control of the individual cooler components. Distributed control is a scheme that allows each element to be powered according to the localized heat load; it would allow for increased cooling in hot spots while minimizing excess heat generated by the TEC in areas where it is not needed. The preliminary results suggest that this type of control may be feasible and would result in a significant increase in TEC effectiveness. The current work provides a closer look at the increased effectiveness and improves the previous models. The current model considers lateral heat conduction in the chip, as well as variable control of the individual cooling elements proportional to heat load. By modeling different scenarios for heat distributions and exploring the application of the individual cooling units, wider applicability of the TEC for computer chip cooling can be achieved. [44] Concerns on Separation: Modeling Separation of Concerns in the Semantics of Embedded Systems E. Jackson, J. Sztipanovits, In Proc. Embedded Software Systems Conference (EMSOFT 2005), (in publication), Jersey City, NJ, September 18-22, 2005. Abstract: Embedded systems are commonly abstracted as collections of interacting components. This perspective has led to the insight that component behaviors can be defined separately from admissible component interactions.
We show that this separation of concerns does not imply that component behaviors can be defined in isolation from their envisioned interaction models, and we argue that a type of behavior/interaction co-design must be employed to successfully leverage the separation of these concerns. We present formal techniques for accomplishing this co-design and describe tools that implement these formalisms. [45] Graph Transformations in OMG's Model-Driven Architecture G. Karsai, A. Agrawal, In Proc. Applications of Graph Transformations with Industrial Relevance (AGTIVE 2003), LNCS 2062, pp. 243-259, Charlottesville, VA, September 29-October 1, 2003. Abstract: The Model-Driven Architecture (MDA) vision of the Object Management Group offers a unique opportunity for introducing Graph Transformation (GT) technology to the software industry. The paper proposes a domain-specific refinement of MDA and describes a practical manifestation of MDA called Model-Integrated Computing (MIC). MIC extends MDA towards domain-specific modeling languages, and it is well supported by various generic tools, including model transformation tools based on graph transformations. The MIC tools are metaprogrammable, i.e., they can be tailored for specific domains using metamodels that include metamodels of transformations. The paper describes the development process and the supporting tools of MIC, and it raises a number of issues for future research on GT in MDA. [46] Design Patterns for Open Tool Integration G. Karsai, A. Lang, S. Neema, Journal of Software and System Modeling, vol. 4, no. 1, DOI: 10.1007/s10270-004-0073-y, 2004. Abstract: Design tool integration is a highly relevant area of software engineering, one that can greatly improve the efficiency of development processes. Design patterns have been widely recognized as important contributors to the success of software systems. This paper describes and compares two large-grain, architectural design patterns that solve specific design tool integration problems. Both patterns have been implemented and used in real-life engineering processes. [47] Real-Time Systems Design in Ptolemy II: A Time-Triggered Approach V. Krishnan, Master's Report, Technical Memorandum UCB/ERL M04/22, University of California, Berkeley, July 12, 2004. Abstract: This report describes a software infrastructure that enables users to design hard real-time systems from Ptolemy II [1].
The Giotto [2] domain within the Ptolemy II design environment is used to model systems, which are then compiled and executed on KURT-Linux [3], a real-time flavor of Linux. The first stage of the software takes a graphical model as input and generates intermediate code in the C language. This intermediate code consists of the task code to be executed, as well as a representation of the tasks' timing requirements. The second stage, called the Embedded Machine [5], reads in the timing information and interprets it to release the tasks for execution per the stated requirements. The released tasks can be assigned either to a standard scheduler such as EDF or to a scheduling interpreter called the Scheduling Machine, or S Machine. The S Machine was developed to gain fine-grained control over the scheduling of tasks. It requires as input scheduling information that specifies a timeline for the tasks involved, thus giving the designer maximum flexibility over task scheduling and, consequently, greater resource utilization. The E and S Machines, when compiled along with the generated task and timing code for the KURT-Linux platform, form an executable that delivers predictable real-time performance. The benefit this approach offers is that the real-time tasks can run alongside ordinary Linux tasks without the timing properties of the real-time tasks being affected. An audio application was designed to illustrate the effectiveness of this tool flow; it achieved a timing uncertainty of less than 130 microseconds in its task execution times. [48] Engineering Education: A Focus on System, in Advances in Control, Communication Networks, and Transportation Systems: In Honor of Pravin Varaiya E. A. Lee, E. H. Abed (Ed.), Systems and Control: Foundations and Applications Series, Birkhauser, Boston, 2005. Abstract: Engineers have a major advantage over scientists.
For the most part, the systems we analyze are of our own devising. It has not always been so. Not long ago, the principal objective of engineering was to coax physical materials to do our bidding by leveraging their intrinsic physical properties; the discipline was one of "applied science." Today, a great deal of engineering is about coaxing abstractions that we have invented. The abstractions provided by microprocessors, programming languages, operating systems, and computer networks are only loosely linked to the underlying physics of electronics. [49] Absolutely Positively On Time: What Would It Take? E. A. Lee, Editorial, February 19, 2005. Available at http://ptolemy.eecs.berkeley.edu/publications/papers/05/EmbeddedSoftwareColumn/ Editorial, March 8, 2005: Despite considerable progress in software and hardware techniques, when embedded computing systems absolutely must meet tight timing constraints, many of the advances in computing become part of the problem rather than part of the solution. Although synchronous digital logic delivers precise timing determinacy, advances in computer architecture have made it difficult or impossible to estimate the execution time of software. Moreover, networking techniques introduce variability and stochastic behavior, and operating systems rely on best-effort techniques.
Worse, programming languages lack time in their semantics, so timing requirements are only specified indirectly. In this column, I examine the following question, "if precise timeliness in a networked embedded system is absolutely essential, what has to change?" The answer, unfortunately, is "nearly everything." Twentieth century computer science has taught us that everything that can be computed can be specified by a Turing machine. "Computation" is accomplished by a terminating sequence of state transformations. This core abstraction underlies the design of nearly all computers, programming languages, and operating systems in use today. But unfortunately, this core abstraction does not fit embedded software very well. This core abstraction fits reasonably well if embedded software is simply "software on small computers." In this view, embedded software differs from other software only in its resource limitations (small memory, small data word sizes, and relatively slow clocks). In this view, the "embedded software problem" is an optimization problem. Solutions emphasize efficiency; engineers write software at a very low level (in assembly code or C), avoid operating systems with a rich suite of services, and use specialized computer architectures such as programmable DSPs and network processors that provide hardware support for common operations. These solutions have defined the practice of embedded software design and development for the last 25 years or so. Of course, thanks to the semiconductor industry's ability to follow Moore's law, the resource limitations of 25 years ago should have almost entirely evaporated today. Why then has embedded software design and development changed so little? It may be that extreme competitive pressure in products based on embedded software, such as consumer electronics, rewards only the most efficient solutions. This argument is questionable, however. 
There are many examples where functionality has proven more important than efficiency. It is arguable that resource limitations are not the only defining factor for embedded software, and may not even be the principal factor. There are clues that embedded software differs from other software in more fundamental ways. If we examine carefully why engineers write embedded software in assembly code or C, we discover that efficiency is not the only concern, and may not even be the main concern. The reasons may include, for example, the need to count cycles in a critical inner loop, not to make it fast, but rather to make it predictable. No widely used programming language integrates a way to specify timing requirements or constraints. Instead, the abstractions they offer are about scalability (inheritance, dynamic binding, polymorphism, memory management), and, if anything, further obscure timing (consider the impact of garbage collection on timing). Counting cycles, of course, becomes extremely difficult on modern processor architectures, where memory hierarchy (caches), dynamic dispatch, and speculative execution make it nearly impossible to tell how long it will take to execute a particular piece of code. Worse, execution time is context dependent, which leads to unmanageable variability. Still worse, programming languages are almost always Turing complete, and as a consequence, execution time is undecidable in general. Embedded software designers must choose alternative processor architectures such as programmable DSPs, and must use disciplined programming techniques (e.g. avoiding recursion) to get predictable timing. Another reason engineers stick to low-level programming is that embedded software typically has to interact with hardware that is specialized to the application. In conventional software, interaction with hardware is the domain of the operating system.
Device drivers are not typically part of an application program, and are not typically created by application designers. But in the embedded software context, generic hardware interfaces are rarer. The fact is that creating interfaces to hardware is not something that higher level languages support. For example, although concurrency is not uncommon in modern programming languages (consider threads in Java), no widely used programming language includes in its semantics the notion of interrupts. Yet the concept is not difficult, and it can be built into programming languages (consider for example nesC and TinyOS, which are widely used for programming sensor networks). It becomes apparent that the avoidance of so many recent improvements in computation is not due to ignorance of those improvements. It is due to a mismatch of the core abstractions and the technologies built on those core abstractions. In embedded software, time matters. In the 20th century abstractions of computing, time is irrelevant. In embedded software, concurrency and interaction with hardware are intrinsic, since embedded software engages the physical world in non-trivial ways (more than keyboards and screens). The most influential 20th century computing abstractions speak only weakly about concurrency, if at all. Even the core 20th century notion of "computable" is at odds with the requirements of embedded software. In this notion, useful computation terminates, but termination is undecidable. In embedded software, termination is failure, and yet to get predictable timing, subcomputations must decidably terminate. Embedded systems are integrations of software and hardware where the software reacts to sensor data and/or issues commands to actuators. The physical system is an integral part of the design and the software must be conceptualized to operate in concert with that physical system. Physical systems are intrinsically concurrent and temporal. 
Actions and reactions happen simultaneously and over time, and the metric properties of time are an essential part of the behavior of the system. Prevailing software methods abstract away time, replacing it with ordering. In imperative languages such as C, C++, and Java, the order of actions is defined by the program, but not their timing. This prevailing imperative abstraction is overlaid with another, that of threads or processes, typically provided by the operating system, but occasionally by the language (as in Java). The lack of timing in the core abstraction is a flaw, from the perspective of embedded software, and threads as a concurrency model are a poor match for embedded systems. They are mainly focused on providing an illusion of parallelism in fundamentally sequential models, and they work well only for modest levels of concurrency or for highly decoupled systems that are sharing resources, where best-effort scheduling policies are sufficient. Indeed, several recent innovative embedded software frameworks, such as Simulink (from The MathWorks), nesC and TinyOS (from Berkeley), and Lustre/SCADE (from Esterel Technologies) are concurrent programming languages with no threads or processes in the programmer's model. Embedded software systems are generally held to a much higher reliability standard than general purpose software. Often, failures in the software can be life threatening (e.g., in avionics and military systems). The prevailing concurrency model in general purpose software that is based on threads does not achieve adequate reliability. In this prevailing model, interaction between threads is extremely difficult for humans to understand. Although it is arguable that concurrent computation is inherently complex, threads make it far more complex because between any two atomic operations (a concept that is rarely well defined), any part of the state of the system can change.
The basic techniques for controlling this interaction use semaphores and mutual exclusion locks, methods that date back to the 1960s. Many uses of these techniques lead to deadlock or livelock. In general-purpose computing, this is inconvenient, and typically forces a restart of the program (or even a reboot of the machine). However, in embedded software, such errors can be far more than inconvenient. Moreover, software is often written without sufficient use of these interlock mechanisms, resulting in race conditions that yield nondeterministic program behavior. In practice, errors due to misuse (or no use) of semaphores and mutual exclusion locks are extremely difficult to detect by testing. Code can be exercised for years before a design flaw appears. Static analysis techniques can help (e.g. Sun Microsystems' LockLint), but these methods are often thwarted by conservative approximations and/or false positives, and they are not widely used in practice. It can be argued that the unreliability of multi-threaded programs is due at least in part to inadequate software engineering processes. For example, better code reviews, better specifications, better compliance testing, and better planning of the development process can help solve the problems. It is certainly true that these techniques can help. However, programs that use threads can be extremely difficult for programmers to understand. If a program is incomprehensible, then no amount of process improvement will make it reliable. Formal methods can help detect flaws in threaded programs, and in the process can improve the understanding that a designer has of the behavior of a complex program. But if the basic mechanisms fundamentally lead to programs that are difficult to understand, then these improvements will fall short of delivering reliable software. Incomprehensible software will always be unreliable software.
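The deadlock hazard described above can be made concrete with a short sketch. The following Python fragment is illustrative only (none of the cited publications use this code): two threads request the same pair of locks in opposite orders, which without discipline risks circular wait, and the classic remedy of a single global lock-acquisition order is applied so the program always completes.

```python
import threading

# Two resources, each guarded by a lock. If one thread takes lock_a then
# lock_b while another takes lock_b then lock_a, both can block forever:
# the circular-wait deadlock that semaphore-era primitives permit. The
# standard remedy is a global acquisition order shared by all threads.
lock_a = threading.Lock()
lock_b = threading.Lock()
counter = {"value": 0}

def ordered(*locks):
    # Impose one global order (here, by object id) to rule out circular wait.
    return sorted(locks, key=id)

def worker(requested_order):
    for _ in range(1000):
        first, second = ordered(*requested_order)  # ignore the requested order
        with first:
            with second:
                counter["value"] += 1

# Each thread *asks* for the locks in a different order, which would risk
# deadlock without the ordering discipline.
t1 = threading.Thread(target=worker, args=((lock_a, lock_b),))
t2 = threading.Thread(target=worker, args=((lock_b, lock_a),))
t1.start(); t2.start()
t1.join(); t2.join()
assert counter["value"] == 2000
```

Lock ordering prevents deadlock but, as the text argues, does nothing for comprehensibility: the discipline lives in the programmer's head, not in the language semantics.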
Prevailing industrial practice in embedded software relies on bench testing for concurrency and timing properties. This has worked reasonably well, because programs are small, and because the software gets encased in a box with no outside connectivity that can alter the behavior of the software. However, applications today demand that embedded systems be feature-rich and networked, so bench testing and encasing become inadequate. In a networked environment, it becomes impossible to test the software under all possible conditions, because the environment is not known. Moreover, general-purpose networking techniques themselves make program behavior much more unpredictable. What would it take to achieve concurrent and networked embedded software that was absolutely positively on time (say, to the precision and reliability of digital logic)? Unfortunately, everything would have to change. The core abstractions of computing need to be modified to embrace time. Computer architectures need to be changed to deliver precisely timed behaviors. Networking techniques need to be changed to provide time concurrence. Programming languages have to change to embrace time and concurrency in their core semantics. Operating systems have to change to rely less on priorities to (indirectly) specify timing requirements. Software engineering methods need to change to specify and analyze the temporal dynamics of software. And the traditional boundary between the operating system and the programming language needs to be rethought. What is needed is nearly a reinvention of computer science. Fortunately, there is quite a bit to draw on. To name a few examples, architecture techniques such as software-managed caches promise to deliver much of the benefit of memory hierarchy without the timing unpredictability.
Operating systems such as TinyOS provide simple ways to create thin wrappers around hardware, and, with nesC, alter the OS/language boundary. Programming languages such as Lustre/SCADE provide understandable and analyzable concurrency. Embedded software languages such as Simulink provide time in their semantics. Network time synchronization methods such as IEEE 1588 provide time concurrence at resolutions (tens of nanoseconds) far finer than any processor or software architectures can deal with today. The time is ripe to pull these techniques together and build the 21st Century (Embedded) Computer Science. Thanks to helpful comments from Elaine Cheong and Douglas Niehaus. [50] Balance between Formal and Informal Methods, Engineering and Artistry, Evolution and Rebuild E. A. Lee, Technical Memorandum UCB/ERL M04/19, University of California, Berkeley, July 4, 2004. Abstract: This paper is the result of a workshop entitled "Software Reliability for FCS" that was organized by the Army Research Office, held on May 18-19, 2004, and hosted by the Institute for Software Integrated Systems (ISIS), Vanderbilt University. I was given the charge of leading one of four topic areas, and was assigned the title. This is my summary of the results of the workshop on this topic. It may well be that established approaches to software engineering will not be sufficient to avert a software disaster in FCS and similarly ambitious, software-intensive efforts.
This topic examines the tension between informal methods, particularly those that focus on the human, creative process of software engineering and the management of that process, and formal methods, specifically those that rely on mathematically rooted systems theories and semantic frameworks. It is arguable that, as these approaches are construed today by their respective (largely disjoint) research communities, neither offers much hope of delivering reliable FCS software. Although certainly these communities have something to offer, the difficulties may be more deeply rooted than either approach can address. In this workshop, we took an aggressive stand that there are problems in software that are intrinsically unsolvable with today's software technology. This stand asserts that no amount of process will fix the problems because the problems are not with the process, and that today's formal techniques cannot solve the problem as long as they remain focused on formalizing today's software technologies. A sea change in the underlying software technology could lead to more effective informal and formal methods. What form could that take? [51] Concurrent Models of Computation for Embedded Software E. A. Lee, Technical Memorandum UCB/ERL M05/2, University of California, Berkeley, January 4, 2005. Abstract: This document collects the lecture notes that I used when teaching EECS 290n in the Fall of 2004. This course is an advanced graduate course with a nominal title of Advanced Topics in Systems Theory. This instance of the course studies models of computation used for the specification and modeling of concurrent real-time systems, particularly those with relevance to embedded software.
Current research and industrial approaches are considered, including real-time operating systems, process networks, synchronous languages (such as used in SCADE, Esterel, and Statecharts), timed models (such as used in Simulink, Giotto, VHDL, and Verilog), and dataflow models (such as used in LabVIEW and SPW). The course combines an experimental approach with a study of formal semantics. The objective is to develop a deep understanding of the wealth of alternative approaches to managing concurrency and time in software. The experimental portion of the course uses Ptolemy II as the software laboratory. The formal semantics portion of the course builds on the mathematics of partially ordered sets, particularly as applied to prefix orders and Scott orders. It develops a framework for models of computation for concurrent systems that uses partially ordered tags associated with events. Discrete-event models, synchronous/reactive languages, dataflow models, and process networks are studied in this context.
Basic issues of computability, boundedness, determinacy, liveness, and the modeling of time are studied. Classes of functions over partial orders, including continuous, monotonic, stable, and sequential functions, are considered, as are semantics based on fixed-point theorems. [52] What are the Key Challenges in Embedded Software? E. A. Lee, Guest Editorial in System Design Frontier, Shanghai Hometown Microsystems Inc., Volume 2, Number 1, January 2005. Abstract: Embedded software has traditionally been thought of as "software on small computers." In this traditional view, the principal problem is resource limitations (small memory, small data word sizes, and relatively slow clocks). Solutions emphasize efficiency; software is written at a very low level (in assembly code or C), operating systems with a rich suite of services are avoided, and specialized computer architectures such as programmable DSPs and network processors are developed to provide hardware support for common operations. These solutions have defined the practice of embedded software design and development for the last 25 years or so. Of course, thanks to the semiconductor industry's ability to follow Moore's law, the resource limitations of 25 years ago should have almost entirely evaporated today.
Why then has embedded software design and development changed so little? It may be that extreme competitive pressure in products based on embedded software, such as consumer electronics, rewards only the most efficient solutions. This argument is questionable, however, since there are many examples where functionality has proven more important than efficiency. We will argue that resource limitations are not the only defining factor for embedded software, and may not even be the principal factor. Resource limitations are an issue to some degree with almost all software. So generic improvements in software engineering should, in theory, also help with embedded software. There are several hints, however, that embedded software is different in fundamental ways. For one, object-oriented techniques such as inheritance, dynamic binding, and polymorphism are rarely used in practice with embedded software development. In another example, processors used for embedded systems often avoid the memory hierarchy techniques that are used in general purpose processors to deliver large virtual memory spaces and faster execution using caches. In a third example, automated memory management, with allocation, deallocation, and garbage collection, is largely avoided in embedded software. To be fair, there are some successful applications of these technologies in embedded software, such as the use of Java in cell phones, but their application remains limited and largely provides services that are actually more akin to general-purpose software applications (such as database services in cell phones). Embedded systems are integrations of software and hardware where the software reacts to sensor data and issues commands to actuators. The physical system is an integral part of the design and the software must be conceptualized to operate in concert with that physical system. Physical systems are intrinsically concurrent and temporal.
Actions and reactions happen simultaneously and over time, and the metric properties of time are an essential part of the behavior of the system. Prevailing software methods abstract away time, replacing it with ordering. In imperative languages such as C, C++, and Java, the order of actions is defined by the program, but not their timing. This prevailing imperative abstraction is overlaid with another, that of threads or processes, typically provided by the operating system, but occasionally by the language (as in Java). The lack of timing in the core abstraction is a flaw, from the perspective of embedded software, and threads as a concurrency model are a poor match to embedded systems. They are mainly focused on providing an illusion of concurrency in fundamentally sequential models, and they work well only for modest levels of concurrency or for highly decoupled systems that are sharing resources, where best-effort scheduling policies are sufficient. Indeed, several recent innovative embedded software frameworks, such as Simulink (from The MathWorks), TinyOS (from Berkeley), and SCADE (from Esterel Technologies) have no threads or processes. Embedded software systems are generally held to a much higher reliability standard than general purpose software. Often, failures in the software can be life threatening (e.g., in avionics and military systems). The prevailing concurrency model based on threads does not achieve adequate reliability. In this prevailing model, interaction between threads is extremely difficult for humans to understand. The basic techniques for controlling this interaction use semaphores and mutual exclusion locks, methods that date back to the 1960s. These techniques often lead to deadlock or livelock conditions, where all or part of a program cannot continue executing.
In general-purpose computing, this is inconvenient, and typically forces a restart of the program (or even a reboot of the machine). However, in embedded software, such errors can be far more than inconvenient. Moreover, software is often written without sufficient use of these interlock mechanisms, resulting in race conditions that yield nondeterministic program behavior. In practice, errors due to misuse (or no use) of semaphores and mutual exclusion locks are extremely difficult to detect by testing. Code can be exercised for years before a design flaw appears. Static analysis techniques can help (e.g. Sun Microsystems' LockLint), but these methods are often thwarted by conservative approximations and/or false positives, and they are not widely used in practice. It can be argued that the unreliability of multi-threaded programs is due at least in part to inadequate software engineering processes. For example, better code reviews, better specifications, better compliance testing, and better planning of the development process can help solve the problems. It is certainly true that these techniques can help. However, programs that use threads can be extremely difficult for programmers to understand. If a program is incomprehensible, then no amount of process improvement will make it reliable. Formal methods can help detect flaws in threaded programs, and in the process can improve the understanding that a designer has of the behavior of a complex program. But if the basic mechanisms fundamentally lead to programs that are difficult to understand, then these improvements will fall short of delivering reliable software. The key challenge in embedded software is to invent (or apply) abstractions that yield more understandable programs that are both concurrent and timed. These abstractions will be very different from those widely used for the design and development of general-purpose software.
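As a sketch of what a "concurrent and timed" abstraction can look like without threads, the following toy discrete-event scheduler (hypothetical code, not drawn from any system cited in this report) dispatches actions in logical-timestamp order. Time is part of the program's semantics, and the result is deterministic regardless of how long any handler physically takes to run.

```python
import heapq

class DiscreteEventScheduler:
    """Minimal logical-time event queue: concurrency without threads."""
    def __init__(self):
        self._queue = []  # entries are (timestamp, sequence, action)
        self._seq = 0     # tie-breaker keeps simultaneous events ordered
        self.now = 0.0    # current logical time

    def post(self, delay, action):
        # Schedule an action at a future logical time.
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        # Process events in timestamp order, advancing logical time.
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
sched = DiscreteEventScheduler()
# Posted out of order on purpose; the semantics, not the posting order,
# determines execution order.
sched.post(2.0, lambda: log.append(("actuate", sched.now)))
sched.post(1.0, lambda: log.append(("sense", sched.now)))
sched.run()
assert log == [("sense", 1.0), ("actuate", 2.0)]
```

This is the flavor of semantics that discrete-event frameworks such as those studied in the publications above make first-class, in contrast to the untimed ordering of threads.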
[53] Classes and Subclasses in Actor-Oriented Design E. A. Lee, S. Neuendorffer, invited paper, Conference on Formal Methods and Models for Codesign (MEMOCODE), San Diego, CA, USA, June 22-25, 2004. Abstract: Actor-oriented languages provide a component composition methodology that emphasizes concurrency. The interfaces to actors are parameters and ports (vs. members and methods in object-oriented languages). Actors interact with one another through their ports via a messaging schema that can follow any of several concurrent semantics (vs. procedure calls, which prevail in OO languages). Domain-specific actor-oriented languages and frameworks are common (e.g. Simulink, LabVIEW, and many others). However, they lack many of the modularity and abstraction mechanisms that programmers have become accustomed to in OO languages, such as classes, inheritance, interfaces, and polymorphism. This extended abstract shows the form that such mechanisms might take in AO languages. A prototype of these mechanisms realized in Ptolemy II is described.
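The actor interface described in this abstract, parameters and ports rather than members and methods, can be sketched in a few lines. The following Python fragment is purely illustrative and is not the Ptolemy II API; all class and parameter names are invented for the example. It shows how subclassing could specialize an actor's reaction while inheriting its port behavior.

```python
class Actor:
    """An actor exposes parameters and ports, not methods, to its peers."""
    def __init__(self, **parameters):
        self.parameters = parameters
        self.outbox = []  # tokens produced on the (single) output port

    def fire(self, token):
        # One reaction to one input token; subclasses override this.
        raise NotImplementedError

class Scale(Actor):
    """Multiply each input token by the 'factor' parameter."""
    def fire(self, token):
        self.outbox.append(token * self.parameters["factor"])

class ClippedScale(Scale):
    """Subclass: inherit scaling, then saturate at the 'limit' parameter."""
    def fire(self, token):
        super().fire(token)  # reuse the inherited reaction
        self.outbox[-1] = min(self.outbox[-1], self.parameters["limit"])

a = ClippedScale(factor=3, limit=10)
for t in (1, 2, 5):
    a.fire(t)
assert a.outbox == [3, 6, 10]  # 15 saturates to the limit of 10
```

The key point mirrored from the abstract: peers never call `Scale`'s methods; they only send tokens to its ports, so any of several concurrent semantics can govern when `fire` is invoked.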
[54] Concurrent Models of Computation for Embedded Software E. A. Lee, S. Neuendorffer, Technical Memorandum UCB/ERL M04/26, University of California, Berkeley, July 22, 2004. Abstract: The prevailing abstractions for software are better suited to the traditional problem of computation, namely transformation of data, than to the problems of embedded software. These abstractions have weak notions of concurrency and the passage of time, which are key elements of embedded software. Innovations such as nesC/TinyOS (developed for programming very small programmable sensor nodes called motes), Click (created to support the design of software-based network routers), Simulink with Real-Time Workshop (created for embedded control software), and Lustre/SCADE (created for safety-critical embedded software) offer abstractions that address some of these issues and differ significantly from the prevailing abstractions in software engineering. This paper surveys some of the abstractions that have been explored. [55] Operational Semantics of Hybrid Systems E. A. Lee, H. Zheng, invited paper, In Proc.
Hybrid Systems: Computation and Control (HSCC), LNCS TBD, Zurich, Switzerland, March 9-11, 2005. Abstract: This paper discusses an interpretation of hybrid systems as executable models. A specification of a hybrid system for this purpose can be viewed as a program in a domain-specific programming language. We describe the semantics of HyVisual, which is such a domain-specific programming language. The semantic properties of such a language affect our ability to understand, execute, and analyze a model. We discuss several semantic issues that come up in defining such a programming language, such as the interpretation of discontinuities in continuous-time signals, the interpretation of discrete-event signals in hybrid systems, and the consequences of numerical ODE solver techniques. We describe the solution in HyVisual by giving its operational semantics. [56] Scientific Workflow Management and the KEPLER System B. Ludäscher, I. Altintas, C. Berkley, D. Higgins, E. Jaeger, M. Jones, E. A. Lee, J. Tao, Y. Zhao, Concurrency & Computation: Practice & Experience, (in publication), draft version, March 2005.
Abstract: Many scientific disciplines are now data and information driven, and new scientific knowledge is often gained by scientists putting together data analysis and knowledge discovery "pipelines." A related trend is that more and more scientific communities realize the benefits of sharing their data and computational services, and are thus contributing to a distributed data, computational, and community infrastructure (a.k.a. "the Grid"). However, this infrastructure is only a means to an end, and scientists ideally should be bothered little with its existence. The goal is for scientists to focus on the development and use of what we call scientific workflows. These are networks of analytical steps that may involve, e.g., database access and querying steps, data analysis and mining steps, and many other steps including computationally intensive jobs on high-performance cluster computers. In this paper we describe characteristics of and requirements for scientific workflows as identified in a number of our application projects. We then elaborate on KEPLER, a particular scientific workflow system currently under development across a number of scientific data management projects. We describe some key features of KEPLER and its underlying PTOLEMY II system, planned extensions, and areas of future research. KEPLER is a community-driven, open source project, and we always welcome related projects and new contributors to join. [57] Automatic Verification of Component-based Real-time CORBA Applications G. Madl, S. Abdelwahed, G. Karsai, In Proc. 25th IEEE International Real-Time Systems Symposium (RTSS'04), Lisbon, Portugal, Dec. 2004, pp. 231-240. Abstract: Distributed real-time embedded (DRE) systems often need to satisfy various time, resource, and fault-tolerance constraints. To manage the complexity of scheduling these systems, many methods use Rate Monotonic Scheduling assuming a time-triggered architecture. This paper presents a method that captures the reactive behavior of complex time- and event-driven systems, can provide simulation runs, and can provide an exact characterization of the timed properties of component-based DRE applications that use the publisher/subscriber communication pattern. We demonstrate our approach on real-time CORBA avionics applications. [58] Verifying Distributed Real-Time Properties of Embedded Systems via Graph Transformations and Model Checking G. Madl, S. Abdelwahed, D., Real-Time Systems Journal, (in publication), 2005. Abstract: quality of service (QoS) needs of distributed real-time embedded (DRE) systems. However, component middleware also introduces challenges for DRE system developers, such as evaluating the predictability of DRE system behavior and choosing the right design alternatives before committing to a specific platform. Model-based technologies help address these issues by enabling design-time analysis and providing the means to automate the development, configuration, and integration of component-based DRE systems. This paper provides three contributions to research on model-based design and analysis of component-based DRE systems.
First, we apply model checking techniques to DRE design models, using model transformations, to verify key QoS properties of component-based DRE systems developed using Real-time CORBA. Second, we implement a property-preserving model-transformation method and use it to define formal semantics for component-based modeling languages. Third, we develop a formal description of the Boeing Bold Stroke architecture from which abstract behavioral models can be constructed and verified. Our results show that model-based techniques enable design-time analysis of timed properties and can be applied to effectively predict, simulate, and verify the event-driven behavior of component-based DRE systems. [59] Shooter Localization in Urban Terrain M. Maroti, G. Simon, A. Ledeczi, J. Sztipanovits, IEEE Computer, pp. 60-61, August 2004. Abstract: The paper describes PinPtr, an acoustic sensor network-based shooter localization system. Instead of using a few expensive acoustic sensors, a low-cost ad-hoc acoustic sensor network measures both the muzzle blast and the shock wave to accurately determine the location of the shooter and the trajectory of the bullet. The basic idea is simple: using the arrival times of the acoustic events at different sensor locations, the shooter position is calculated from the speed of sound and the locations of the sensors. The robust sensor fusion algorithm, which runs on the base station, is based on a search over a hyper-surface defined by a consistency function. The consistency function, which gives the number of sensor measurements consistent with a hypothetical shooter position and shot time, automatically classifies measurements and eliminates those that are erroneous or the result of multipath effects. The highly redundant ad-hoc sensor field ensures that enough good measurements are left to determine the shooter's position. The global maximum of the surface, corresponding to the shooter position, is guaranteed to be found by a fast search algorithm. [60] Fault Tolerant Data Flow Modeling Using the Generic Modeling Environment M. L. McKelvin, Jr., J. Sprinkle, C. Pinello, A. Sangiovanni-Vincentelli, 12th Annual IEEE International Conference and Workshop on the Engineering of Computer Based Systems, Greenbelt, Maryland, Apr. 4-5, 2005, pp. 229-235. Abstract: Designing embedded software for safety-critical, real-time feedback control applications is a complex and error-prone task. Fault tolerance is an important aspect of safety. In general, fault tolerance is achieved by duplicating hardware components, a solution that is often more expensive than needed. In particular applications, such as automotive electronics, a subset of the functionalities has to be guaranteed, while others are not crucial to the safe operation of the vehicle. In this case, we must make sure that this subset is operational under the potential faults of the architecture. A model of computation called Fault-Tolerant Data Flow (FTDF) was recently introduced to describe, at the highest level of abstraction of the design, the fault tolerance requirements on the functionality of the system. Then, the problem of implementing the system efficiently on a platform consists of finding a mapping of the FTDF model onto the components of the platform. A complete design flow for this kind of application requires a user-friendly graphical interface to capture the functionality of the system with the FTDF model, algorithms for choosing an architecture optimally, (possibly automatic) code generation for the parts of the system to be implemented in software, and verification tools. In this paper, we use the Generic Modeling Environment (GME) developed at Vanderbilt University to design a graphical design capture system and to provide the infrastructure for automatic code generation. The design flow is embedded into the Metropolis environment developed at the University of California at Berkeley to provide the necessary verification and analysis framework. [61] A Visual Language for Describing Instruction Sets and Generating Decoders T. Meyerowitz, J. Sprinkle, A. Sangiovanni-Vincentelli, 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), Vancouver, BC, Oct. 25, 2004, pp. 23-32. Abstract: We detail the syntax and semantics of ISA_ML, a visual modeling language for describing the instruction set architectures of microprocessors, and an accompanying tool that takes a description in the language and generates decoders from it, in the form of a disassembler and a micro-architectural trace interfacer. The language and tool were built using the Generic Modeling Environment (GME), and leverage the concepts of meta-modeling to increase productivity and to provide extensive error checking to the modeler. Using this tool, we were able to construct models of significant subsets of the MIPS, ARM, and PowerPC instruction sets, each in 8 hours or less. This language can be retargeted for other purposes, such as generating synthesizable instruction decoders.
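To illustrate the decoder-generation idea in [61], a minimal table-driven sketch follows. The toy 16-bit instruction set, field layout, and function names here are invented for illustration only; ISA_ML itself is a visual GME-based language, not this table format.

```python
# Sketch: generating a disassembler from a declarative instruction-set
# description. The toy 16-bit ISA (opcodes, mnemonics, bit fields) is
# hypothetical and is not taken from ISA_ML.

# Each entry: top-4-bit opcode -> (mnemonic, operand fields), where a
# field is (name, high_bit, low_bit) within a 16-bit instruction word.
ISA = {
    0x1: ("add", [("rd", 11, 8), ("ra", 7, 4), ("rb", 3, 0)]),
    0x2: ("ldi", [("rd", 11, 8), ("imm", 7, 0)]),
    0x3: ("jmp", [("addr", 11, 0)]),
}

def extract(word, hi, lo):
    """Extract the inclusive bit field word[hi:lo]."""
    return (word >> lo) & ((1 << (hi - lo + 1)) - 1)

def make_disassembler(isa):
    """Build a decoder function from the declarative description."""
    def disassemble(word):
        opcode = extract(word, 15, 12)
        if opcode not in isa:
            return f".word 0x{word:04x}"   # undecodable word
        mnemonic, fields = isa[opcode]
        operands = ", ".join(f"{n}=0x{extract(word, hi, lo):x}"
                             for n, hi, lo in fields)
        return f"{mnemonic} {operands}"
    return disassemble

disasm = make_disassembler(ISA)
print(disasm(0x1234))  # add rd=0x2, ra=0x3, rb=0x4
```

Because the decoder is derived entirely from the table, retargeting to another (toy) instruction set only requires a new table, which is the productivity argument the abstract makes for a modeling-language front end.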
[62] CCured: Type-Safe Retrofitting of Legacy Software G. C. Necula, J. Condit, M. Harren, S. McPeak, W. Weimer, ACM Transactions on Programming Languages and Systems, vol. 27, no. 3, May 2005. Abstract: This article describes CCured, a program transformation system that adds type safety guarantees to existing C programs. CCured attempts to verify statically that memory errors cannot occur, and it inserts run-time checks where static verification is insufficient. CCured extends C's type system by separating pointer types according to their usage, and it uses a surprisingly simple type inference algorithm that is able to infer the appropriate pointer kinds for existing C programs. CCured uses physical subtyping to recognize and verify a large number of type casts at compile time. Additional type casts are verified using run-time type information. CCured uses two instrumentation schemes: one that is optimized for performance, and one in which metadata is stored in a separate data structure whose shape mirrors that of the original user data. This latter scheme allows instrumented programs to invoke external functions directly on the program's data without the use of a wrapper function. We have used CCured on real-world security-critical network daemons to produce instrumented versions without memory-safety vulnerabilities, and we have found several bugs in these programs. The instrumented code is efficient enough to be used in day-to-day operations. [63] Model-Integrated Computing for Heterogeneous Systems S. Neema, A. Dixon, T. Bapty, J. Sztipanovits, In Proc. International Conference on Computing, Communications and Control Technologies (CCCT'04), Austin, TX, August 14-17, 2004. Abstract: Modern embedded and networked embedded system applications demand very high performance from systems with minimal resources. These applications must also be flexible enough to operate in a rapidly changing environment. High performance with limited resources requires application-specific architectures, while flexibility requires adaptation capabilities. The design of these systems creates unique challenges, since the traditional decomposition of the design space into hardware and software components and into functional and non-functional requirements does not give acceptable performance. Model-Integrated Computing (MIC) is an emerging design technology which integrates all essential aspects of system design in a general, but highly customizable, framework. This paper provides an overview of MIC and shows its application in the design of a reconfigurable processing system. [64] Actor-Oriented Metaprogramming S. Neuendorffer, PhD Thesis, University of California, Berkeley, December 21, 2004. Abstract: Robust design of concurrent systems is important in many areas of engineering, from embedded systems to scientific computing. Designing such systems using dataflow-oriented models can expose large amounts of concurrency to system implementation. Utilizing this concurrency effectively enables distributed execution and increased throughput, or reduced power usage at the same throughput. Code generation can then be used to automatically transform the design into an implementation, allowing design refactoring at the dataflow level and reduced design time over hand implementation. This thesis focuses particularly on the benefits and disadvantages that arise when constructing models from generic, parameterized, dataflow-oriented components called actors. A designer can easily reuse actors in different models with different parameter values, data types, and interaction semantics. Additionally, during execution of a model, actors can be reconfigured by changing their connections or assigning new parameter values. This form of reconfiguration can conveniently represent adaptive systems, systems with multiple operating modes, systems without fixed structure, and systems that control other systems. Ptolemy II is a Java-based design environment that supports the construction and execution of hierarchical, reconfigurable models using actors. Unfortunately, allowing unconstrained reconfiguration of actors can sometimes cause problems. If a model is reconfigured, it may no longer accurately represent the system being modeled. Reconfiguration may prevent the application of static scheduling analysis to improve execution performance.
In systems with data type parameters, reconfiguration may prevent static analysis of data types, eliminating an important form of error detection. In such cases, it is therefore useful to limit which parameters or structures in a model can be reconfigured, or when during execution reconfiguration can occur. This thesis describes a reconfiguration analysis that determines when reconfiguration occurs in a hierarchical model. Given appropriately formulated constraints, the analysis can alert a designer to potential design problems. The analysis is based on a mathematical framework for approximately describing periodic points in the behavior of a model. This framework has a lattice structure that reflects the hierarchical structure of actors in a model. Because of the lattice structure of the framework, this analysis can be performed efficiently. Models of two different systems are presented where this analysis helps verify that reconfiguration does not violate the assumptions of the model. Run-time reconfiguration of actors not only presents difficulties for a system modeler, but can also impede efficient system implementation. In order to support run-time reconfiguration of actors in Java, Ptolemy II introduces extra levels of indirection into many operations. The overhead from this indirection is incurred in all models, even if a particular model does not use reconfiguration. In order to remove the indirection overhead, we have developed a system called Copernicus which transforms a Ptolemy II model into self-contained Java code. In performing this transformation, the Java code for each actor is specialized to its usage in a particular model. As a result, indirection overhead only remains in the generated code if it is required by reconfiguration in the model. The specialization is guided by various types of static analysis, including data type analysis and analysis of reconfiguration.
In certain cases, the generated code runs 100 times faster and with almost no memory allocation, compared to the same model running in a Ptolemy II simulation. For small examples, performance close to handwritten Java code has been achieved. [65] Modeling Real-World Control Systems: Beyond Hybrid Systems S. Neuendorffer, In Proc. Winter Simulation Conference (WSC), Washington, DC, USA, December 5-8, 2004. Abstract: Hybrid system modeling refers to the construction of system models combining both continuous and discrete dynamics. These models can greatly reduce the complexity of a physical system model by abstracting some of the continuous dynamics of the system into discrete dynamics. Hybrid system models are also useful for describing the interaction between physical processes and computational processes, such as in a digital feedback control system. Unfortunately, hybrid system models poorly capture common software architecture design patterns, such as threads, mobile code, safety, and hardware interfaces. Dealing effectively with these practical software issues is crucial when designing real-world systems. This paper presents a model of a complex control system that combines continuous-state physical system models with rich discrete-state software models in a disciplined fashion. We show how expressive modeling using multiple semantics can be used to address the design difficulties in such a system. [66] Automated Task Allocation for Network Processors W. Plishker, K. Ravindran, N. Shah, K. Keutzer, In Proc. Network System Design Conference, October 2004, pp. 235-245. Abstract: Network processors have great potential to combine high performance with increased flexibility. These multiprocessor systems consist of programmable elements, dedicated logic, and specialized memory and interconnection networks. However, the architectural complexity of these systems makes programming difficult. Programmers must be able to productively implement high-performance applications for network processors to succeed. Ideally, designers describe applications in a domain-specific language (DSL). DSLs expedite the development process by providing component libraries, communication and computation semantics, visualization tools, and test suites for an application domain. An integral aspect of mapping applications described in a DSL to network processors is allocating computational tasks to processing elements. We formulate this task allocation problem for a popular network processor, the Intel IXP1200. This method proves to be computationally efficient and produces results that are within 5% of the aggregate egress bandwidths achieved by hand-tuned implementations on two representative applications: IPv4 forwarding and DiffServ.
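The task allocation problem in [66] can be sketched as balancing task costs across processing elements. The greedy longest-processing-time heuristic, the task names, and the costs below are invented for illustration; the abstract does not specify the authors' actual formulation for the IXP1200, so this is only an instance of the problem, not their method.

```python
# Sketch: assign tasks to processing elements so that the most-loaded
# element is as lightly loaded as possible. Greedy heuristic: place the
# heaviest remaining task on the currently least-loaded element. The
# packet-processing task names and costs are hypothetical.

def allocate(task_costs, num_elements):
    """Assign tasks (name -> cost) to num_elements processing elements."""
    loads = [0.0] * num_elements
    assignment = {}
    for name, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        pe = min(range(num_elements), key=lambda i: loads[i])
        assignment[name] = pe
        loads[pe] += cost
    return assignment, loads

tasks = {"rx": 4.0, "classify": 3.0, "lookup": 3.0, "queue": 2.0, "tx": 2.0}
assignment, loads = allocate(tasks, 2)
print(assignment, loads)
```

A production allocator for a network processor would also model inter-task communication and memory constraints, which is precisely what makes the real problem, as the abstract notes, hard enough to warrant a dedicated formulation.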
[67] Designing Distributed Diagnosers for Complex Physical Systems I. Roychoudhury, G. Biswas, X. Koutsoukos, S. Abdelwahed, Intl. Workshop on Principles of Diagnosis (DX-05), (in publication), Monterey, CA, June 2005. Abstract: Online diagnosis methods require large, computationally expensive diagnosis tasks to be decomposed into sets of smaller tasks so that time and space complexity constraints are not violated. This paper defines the distributed diagnosis problem in the Transcend qualitative diagnosis framework, and then develops heuristic algorithms for generating a set of local diagnosers that solve the global diagnosis problem without a coordinator. Two versions of the algorithm are discussed. The time complexity and optimality of these algorithms are compared and validated through experimental results. [68] A Distributed Algorithm for Acoustic Localization Using a Distributed Sensor Network P. Schmidt, I. Amundson, K. D. Frampton, Journal of the Acoustical Society of America, Vol. 115, No. 5, Pt. 2, pp. 2578, 2004. Abstract: An acoustic source localization algorithm has been developed for use with large-scale sensor networks using a decentralized computing approach. This algorithm, based on a time delay of arrival (TDOA) method, uses information from the minimum number of sensors necessary for an exactly determined solution. Since the algorithm is designed to run on computational devices with limited memory and speed, the complexity of the computations has been intentionally limited. The sensor network consists of an array of battery-operated COTS Ethernet-ready embedded systems, each with an integrated microphone as a sensor. All solutions are calculated as distinct values, and the same TDOA method used for the solution is applied to rank the accuracy of an individual solution. Repeated for all combinations of sensor nodes, solutions with accuracy equivalent to complex array calculations are obtainable. The effects of sensor placement uncertainty and multipath propagation are quantified and analyzed, and a comparison to results obtained in the field with a large array and a centralized computing capability using a complex, memory-intensive algorithm is included. [69] Optimal Control for a Class of Stochastic Hybrid Systems L. Shi, A. Abate, S. Sastry, In Proc. International Conference on Decision and Control, Atlantis, Bahamas, December 2004. Abstract: In this paper, an optimal control problem over a "hybrid Markov Chain" (hMC) is studied. An hMC can be thought of as a traditional MC with continuous-time dynamics pertaining to each node; from a different perspective, it can be regarded as a class of hybrid system with random discrete switches induced by an embedded MC. As a consequence of this setting, the index to be maximized, which depends on the dynamics, is the expected value of a non-deterministic cost function. After obtaining a closed form for the objective function, we gradually show how to devise a computationally tractable algorithm to reach the optimal value. Furthermore, the complexity and rate of convergence of the algorithm are analyzed. Proofs and simulations of our results are provided; moreover, an applicative and motivating example is introduced.
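The hybrid-Markov-chain setting of [69] can be sketched by simulation: continuous dynamics in each discrete mode, with random, exponentially timed switches. The two modes, drift and rate constants, quadratic running cost, and deterministic mode alternation below are all invented for illustration; the paper derives the expected cost in closed form, whereas this sketch only estimates it by Monte Carlo.

```python
import math
import random

# Sketch of a "hybrid Markov chain": dynamics dx/dt = a*x in each
# discrete mode, with an exponentially distributed dwell time before
# switching. All constants here are hypothetical; the embedded chain
# is simplified to alternation between two modes.

RATES = {0: 1.0, 1: 0.5}    # switching rate out of each mode
DRIFT = {0: -0.5, 1: 0.3}   # dx/dt = DRIFT[mode] * x while in the mode

def run(x0, horizon, rng):
    """One trajectory; returns the integral of x(t)^2 over [0, horizon]."""
    t, x, mode, cost = 0.0, x0, 0, 0.0
    while t < horizon:
        dwell = min(rng.expovariate(RATES[mode]), horizon - t)
        a = DRIFT[mode]
        # closed-form integral of (x * exp(a*s))^2 for s in [0, dwell]
        cost += x * x * math.expm1(2.0 * a * dwell) / (2.0 * a)
        x *= math.exp(a * dwell)
        t += dwell
        mode = 1 - mode     # embedded chain simplified to alternation
    return cost

rng = random.Random(0)
estimate = sum(run(1.0, 5.0, rng) for _ in range(2000)) / 2000
print(f"estimated expected cost: {estimate:.3f}")
```

Integrating the running cost in closed form within each dwell interval keeps the per-trajectory work proportional to the number of switches rather than to any time-discretization step.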
In this paper, an optimal control problem over a “hybrid Markov Chain” (hMC) is studied. A hMC can be thought of as a traditional MC with continuous time dynamics pertaining to each node; from a different perspective, it can be regarded as a class of hybrid system with random discrete switches induced by an embedded MC. As a consequence of this setting, the index to be maximized, which depends on the dynamics, is the expected value of a non deterministic cost function. After obtaining a closed form for the objective function, we gradually suggest how to device a computationally tractable algorithm to get to the optimal value. Furthermore, the complexity and rate of convergence of the algorithm is analyzed. Proofs and simulations of our results are provided; moreover, an applicative and motivating example is introduced. [70] Generative Components for Hybrid Systems Tools J. Sprinkle, Journal of Object Technology, vol. 4, no. 3, April 2005, Special issue: 6th GPCE Young Researchers Workshop 2004, pp. 35-39, Available at http://www.jot.fm/issues/issue_2005_04/article5 Abstract: Generative techniques, while normally associated with programming languages or code generation, may also be used to produce non-executable artifacts (e.g., configuration or toolchain artifacts). Coupled with domain-specific modeling, generative techniques provide a way to consolidate toolchains and complex domains into a relatively compact space, by providing generators that produce artifacts in the appropriate semantic domain. This paper describes the motivation and usage of one such environment, in the domain of hybrid systems, as well as discussion and goals for future research. Generative techniques, while normally associated with programming languages or code generation, may also be used to produce non-executable artifacts (e.g., configuration or toolchain artifacts). 
[71] On the Partitioning of Syntax and Semantics for Hybrid Systems Tools J. Sprinkle, A. D. Ames, S. S. Sastry, 44th IEEE Conference on Decision and Control and European Control Conference ECC 2005 (CDC-ECC'05), (submitted for publication), Seville, Spain, Dec. 12--15, 2005. Abstract: Interchange formats are notoriously difficult to finish. That is, once one is developed, it is highly nontrivial to prove (or disprove) generality, and difficult at best to gain acceptance from all major players in the application domain. This paper addresses such a problem for hybrid systems, not from the perspective of a tool interchange format, but rather that of tool availability in a toolbox.
Through the paper we explain why we think this is a good approach for hybrid systems, and we also analyze the domain of hybrid systems to discern the semantic partitions that can be formed to yield a classification of tools based on their semantics. These discoveries give us the foundation upon which to build semantic capabilities, and to guarantee operational interaction between tools based on matched operational semantics. [72] A Paradigm for Teaching Modeling Environment Design J. Sprinkle, J. Davis, G. Nordstrom, 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), Educators Symposium (Poster Session), ACM, Vancouver, BC, Oct. 24--28, 2004. Abstract: Model-Integrated Computing (MIC) is a generic term for the practice of coupling models of a complex system with the execution of that system. Although MIC has been in use for years by experts who learn its techniques in an ad hoc manner, it is only recently that system modeling, as a science, has begun to be taught as a subject. This paper presents a paradigm in use in a course designed specifically to teach the design of domain-specific modeling environments.
The work describes how this paradigm grows throughout the course to complement the expanding nomenclature, mapping technologies, visitor and object-oriented technologies, and design decisions encountered by the students. [73] Deciding to Land a UAV Safely in Real Time J. Sprinkle, J. M. Eklund, S. S. Sastry, In Proc. American Control Conference (ACC) 2005, (in publication), Portland, OR, Jun. 8--10, 2005. Abstract: The difficulty of autonomous free flight of a fixed-wing UAV is trivial when compared to that of takeoff and landing. There is an even more marked difference when deciding whether or not a UAV can capture or recapture a certain trajectory, since the answer depends on the operating ranges of the aircraft. A common example requiring this calculation, from a military perspective, is the determination of whether or not an aircraft can capture a landing trajectory (i.e., glideslope) from a certain initial state (velocity, position, etc.). As state dimensions increase, the time to calculate the decision grows exponentially. This paper describes how we can make this decision at flight time, and guarantee that the decision will give a safe answer before the state changes enough to invalidate the decision. We also describe how the computations should be formulated, and how the partitioning of the state space can be done to reduce the computation time required. Flight testing was performed with our design, and results are given.
[74] Encoding Aerial Pursuit/Evasion Games with Fixed-Wing Aircraft into a Nonlinear Model Predictive Tracking Controller J. Sprinkle, J. M. Eklund, H. J. Kim, S. S. Sastry, IEEE Conference on Decision and Control, pp. 2609--2614, Dec. 2004. Abstract: Unmanned Aerial Vehicles (UAVs) have shown themselves to be highly capable in intelligence gathering, as well as a possible future deployment platform for munitions. Currently UAVs are supervised or piloted remotely, meaning that their behavior is not autonomous throughout the flight. For uncontested missions this is a viable method; however, if confronted by an adversary, UAVs may be required to execute maneuvers faster than a remote pilot could perform them in order to evade being targeted. In this paper we give a description of a nonlinear model predictive controller in which evasive maneuvers in three dimensions are encoded for a fixed-wing UAV for the purposes of this pursuit/evasion game. [75] Toward Design Parameterization Support for Model Predictive Control J. Sprinkle, J. M. Eklund, S. S.
Sastry, IEEE 4th International Conference on Intelligent Systems Design and Application, IEEE Press, Budapest, Hungary, Aug. 26--28, 2004. Abstract: Research into the autonomous behavior of Unmanned Aerial Vehicles (UAVs) requires concise and dependable specification techniques in order to provide behavioral descriptions for the controllers of these aircraft. Practical issues with autonomous aircraft involve the safety and reliability of the controller (e.g., guarantee of stability), as well as verification of the high-level intention of the autonomous behavior. Since most behaviors are implemented orthogonally to their high-level specification, improvements in the ability to rapidly specify the models that govern the behavior are certainly welcome. This paper describes the framework for decreasing the abstraction required to specify the behavior of an autonomous controller implemented through model predictive control. [76] A Domain-Specific Visual Language for Domain Model Evolution J. Sprinkle, G. Karsai, J. Vis. Lang. and Comp., vol. 15, no. 3-4, pp. 291-307, Jun. 2004.
Abstract: Domain-specific visual languages (DSVLs) are concise and useful tools that allow the rapid development of the behavior and/or structure of applications in well-defined domains. These languages are typically developed specifically for a domain, and have a strong cohesion to the domain concepts, which often appear as primitives in the language. The strong cohesion between DSVL language primitives and the domain is a benefit for development by domain experts, but can be a drawback when the domain evolves, even when that evolution appears insignificant. This paper presents a domain-specific visual language developed expressly for the evolution of domain-specific visual languages, and uses concepts from graph rewriting to specify and carry out the transformation of models built using the original DSVL. [77] Using the Hybrid Systems Interchange Format to Input Design Models to Verification & Validation Tools J. Sprinkle, O. Shakernia, R. Miller, S. S. Sastry, IEEE Aerospace Conference, Big Sky, MT, Mar. 2005.
Abstract: The domain of hybrid systems lacks a set of mature tools which can reliably apply verification and validation (V&V) techniques for all kinds of systems. Currently, no single tool supports analysis, simulation, verification, validation, and code synthesis of controllers for hybrid systems. As such, it becomes necessary to depend on several tools for analysis of different aspects of the system. This paper describes the utilization of the definition of a system in more than one toolsuite, while reducing the effort required for the engineers that design the controllers to interface to the V&V toolsuites. [78] Platform Modeling and Model Transformations for Analysis T. Szemethy, G. Karsai, Journal of Universal Computer Science, vol. 10, no. 10, pp. 1383-1406, 2004. Abstract: The model-based approach to the development of embedded systems relies on the use of explicit models in the design process. If these models faithfully represent the components of the system with respect to their properties as well as their interactions, then they can be used to predict the dynamic behavior of the system under construction. In this paper we argue for modeling the execution platform that facilitates the component interactions, and show how models of the application and knowledge of the platform can be used to translate system configurations into another abstract formalism (timed automata, in our case) that allows system verification through model checking.
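The target formalism in [78] can be illustrated with a toy reachability check over a clocked transition system. The locations, guards, and clock bound below are invented for illustration; real translations of the kind described target model checkers such as UPPAAL.

```python
from collections import deque

# Toy clocked transition system: each location lists edges of the form
# (min_clock_guard, target, reset_clock). Locations and guards are
# invented; this only shows the (location, clock) state space that
# timed-automata model checkers explore.
CLOCK_MAX = 4
EDGES = {
    "idle": [(2, "run", True)],    # enabled once the clock reaches 2
    "run":  [(1, "done", False)],
    "done": [],
}

def reachable(start="idle"):
    """Explore (location, clock) states via unit delays and switches."""
    seen, queue = set(), deque([(start, 0)])
    while queue:
        loc, t = queue.popleft()
        if (loc, t) in seen:
            continue
        seen.add((loc, t))
        if t < CLOCK_MAX:                  # let one time unit elapse
            queue.append((loc, t + 1))
        for guard, target, reset in EDGES[loc]:
            if t >= guard:                 # discrete switch when guard holds
                queue.append((target, 0 if reset else t))
    return {loc for loc, _ in seen}
```

Asking whether a location such as "done" is reachable is the kind of question the generated timed automata are handed to a model checker to answer.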
[79] Introducing Embedded Software and Systems Education and Advanced Learning Technology in an Engineering Curriculum J. Sztipanovits, G. Biswas, K. Frampton, A. Gokhale, L. Howard, G. Karsai, T. J. Koo, X. Koutsoukos, D. Schmidt, ACM Transactions on Embedded Computing Systems, (in publication), 2005. Abstract: Embedded software and systems are at the intersection of electrical engineering, computer engineering, and computer science, with increasing importance in mechanical engineering. Despite the clear need for knowledge of systems modeling and analysis (covered in electrical and other engineering disciplines) and analysis of computational processes (covered in computer science), few academic programs have integrated the two disciplines into a cohesive program of study. This paper describes the efforts conducted at Vanderbilt University to establish a curriculum that addresses the needs of embedded software and systems. Given the compartmentalized nature of traditional engineering schools, where each discipline has an independent program of study, we have had to devise innovative ways to bring together the two disciplines.
The paper also describes our current efforts in using learning technology to construct, manage, and deliver sophisticated computer-aided learning modules that can supplement the traditional course structure in the individual disciplines through out-of-class and in-class use. [80] Experiments on the Decentralized Vibration Control with Networked Embedded Systems T. Tao, K. D. Frampton, Presented at the 2004 ASME International Mechanical Engineering Conference and Exposition, Anaheim, CA, November 2004.
Abstract: The early promise of centralized active control technologies to improve the performance of large-scale, complex systems has not been realized, largely due to the inability of centralized control systems to "scale up"; that is, the inability to continue to perform well when the number of sensors and actuators becomes large. Now, with recent advances in micro-electro-mechanical systems (MEMS), microprocessors, and embedded systems technologies, decentralized control systems may see these promises through. A networked embedded system consists of many nodes that possess limited computational capability, sensors, actuators, and the ability to communicate with each other over a network. The aim of this decentralized control system is to control the vibration of a structure by using such an embedded system backbone. The key attributes of such control architectures are that they be scalable and effective within the constraints of embedded systems. Toward this end, the decentralized vibration control of a simply supported beam has been implemented experimentally. The experiments demonstrate that reduction of the system vibration is realized with the decentralized control strategy while meeting the embedded system constraints, such as a minimum of inter-node sensor data communication, robustness to delays in sensor data, and scalability. [81] Proceedings of the 4th OOPSLA Workshop on Domain-Specific Modeling (DSM'04) J. Tolvanen, J. Sprinkle, M. Rossi, eds., Jyväskylä, Finland, 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), University of Jyväskylä, Oct. 2004.
Abstract: Domain-Specific Modeling aims at raising the level of abstraction beyond programming by specifying the solution directly using domain concepts. In a number of cases the final products can be generated from these high-level specifications. This automation is possible because of domain-specificity: both the modeling language and the code generators fit the requirements of a narrow domain only, often in a single company. This is the fourth workshop on Domain-Specific Modeling, following the encouraging experiences from the earlier workshops at past OOPSLA conferences (Tampa 2001, Seattle 2002, and Anaheim 2003). During the time the DSM workshops have been organized, interest in domain-specific modeling languages, metamodeling, and supporting tools has increased greatly. [82] Towards Generation of Efficient Transformations A. Vizhanyo, A. Agrawal, F. Shi, Generative Programming and Component Engineering, 3rd International Conference, October 2004, LNCS 3286, pp. 298-316, 2004.
Abstract: In this paper we discuss efficiency-related constructs of a graph rewriting language, called Graph Rewriting and Transformation (GReAT), and introduce a code generator tool, which together provide a programming framework for the specification and efficient realization of graph rewriting systems. We argue that the performance problems frequently associated with the implementation of the transformation can be significantly reduced by partial evaluation and by adopting language constructs that allow algorithmic optimizations. [83] Distributed Control of Thermoelectric Coolers D. G. Walker, K. D. Frampton, R. D. Harvey, In Proc. 9th Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), paper no. 151, Las Vegas, NV, June 1-4, 2004, pp. 361--366. Abstract: Thermoelectric refrigeration has been studied for use in electronics cooling applications. Because of their low efficiency, a significant amount of additional heat is produced that must be removed from the device by passive means. However, even well-designed passive heat removal systems are faced with physical limitations and cannot dissipate additional energy. Therefore, thermoelectric coolers often are not a viable alternative for chip cooling. Distributed control refers to a concept where individual devices operate based on their local condition and that of their nearest neighbors.
With this approach a large number of sensors and actuators can be bundled with minimal compute power and communication. In the case of electronics cooling, a thermoelectric cooler can be constructed as many separate coolers, each with its own thermocouple and minimal control system. With this architecture, only those regions that generate heat will be cooled thermoelectrically, reducing the impact on the heat removal system and increasing the viability of thermoelectric coolers. Through analytic TEC heat models, the present work seeks to evaluate the possibility of using distributed-control thermoelectric coolers for microelectronics cooling applications. Results indicate that TEC coolers can provide a two-fold increase in efficiency when distributed control is used for nonuniformly heated chips.
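The nearest-neighbor control idea in [83] can be sketched in a few lines. The proportional gains, temperatures, and one-dimensional cell layout below are invented for illustration and stand in for the paper's analytic TEC heat models.

```python
# Each cooler cell drives proportionally to its own temperature error and
# to its disagreement with its nearest neighbors. Gains, setpoint, and the
# 1-D row of cells are invented for illustration only.
def cooler_drives(temps, setpoint, kp=0.5, kn=0.2):
    """Return one non-negative drive level per cell from local readings."""
    drives = []
    for i, t in enumerate(temps):
        local_err = t - setpoint
        neighbors = [temps[j] for j in (i - 1, i + 1) if 0 <= j < len(temps)]
        coupling = sum(t - n for n in neighbors) / len(neighbors)
        drives.append(max(0.0, kp * local_err + kn * coupling))
    return drives
```

With a single hot spot, only the cell over it (and none of the uniformly cool cells) draws drive power, which is the mechanism behind the efficiency gain the abstract reports for nonuniformly heated chips.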
[84] The Design and Application of Structured Types in Ptolemy Y. Xiong, E. A. Lee, X. Liu, Y. Zhao, L. C. Zhong, IEEE Int. Conf. on Granular Computing (GrC 2005), (in publication), Beijing, China, July 25-27, 2005. Abstract: Ptolemy II is a component-based design and modeling environment. It has a polymorphic type system that supports both base types and structured types, such as arrays and records. The base type support was reported in [12]. This paper presents the extensions that support structured types. In the base type system, all the types are organized into a type lattice, and type constraints in the form of inequalities can be solved efficiently over the lattice. We take a hierarchical and granular approach to add structured types to the lattice, and extend the format of inequality constraints to allow arbitrary nesting of structured types. We also analyze the convergence of the constraint-solving algorithm on an infinite lattice after structured types are added. To show the application of structured types, we present a Ptolemy II model that implements part of the IEEE 802.11 specifications. This model makes extensive use of record types to represent the protocol messages in the system.
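The inequality-constraint machinery described in [84] can be miniaturized to show the flavor of the algorithm. The four-element, totally ordered lattice below is invented; Ptolemy II's real lattice is a partial order over many more types, and its solver is correspondingly richer.

```python
# A tiny version of inequality-constraint solving over a type lattice.
# The lattice (unknown <= int <= double <= general) is invented for
# illustration; it is a total order, so lub() is just a max.
ORDER = ["unknown", "int", "double", "general"]
RANK = {t: i for i, t in enumerate(ORDER)}

def lub(a, b):
    """Least upper bound of two types in this toy lattice."""
    return a if RANK[a] >= RANK[b] else b

def solve(constraints, variables):
    """Find the least solution of constraints var >= lub(sources).

    Each constraint is (var, [sources]), a source being another variable
    or a constant type. Start every variable at bottom and iterate to a
    fixed point, which is the least solution.
    """
    types = {v: "unknown" for v in variables}
    changed = True
    while changed:
        changed = False
        for var, sources in constraints:
            bound = "unknown"
            for s in sources:
                bound = lub(bound, types.get(s, s))
            if RANK[bound] > RANK[types[var]]:
                types[var] = bound
                changed = True
    return types
```

For example, the constraints x >= int and y >= lub(x, double) resolve to x = int and y = double, the least types consistent with the inequalities.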
[85] Dynamic Dataflow Modeling in Ptolemy II G. Zhou, Master's Report, Technical Memorandum No. UCB/ERL M05/2, University of California, Berkeley, December 21, 2004. Abstract: Dataflow process networks are a special case of Kahn process networks (PN). In dataflow process networks, each process consists of repeated firings of a dataflow actor, which defines a quantum of computation. Using this quantum avoids the complexities and context-switching overhead of process suspension and resumption incurred in most implementations of Kahn process networks. Instead of context switching, dataflow process networks are executed by scheduling the actor firings. This scheduling can be done at compile time for synchronous dataflow (SDF), which is a particularly restricted case with the extremely useful property that deadlock and boundedness are decidable. However, for the most general dataflow, the scheduling has to be done at run time, and questions about deadlock and boundedness cannot be statically answered. This report describes and implements a dynamic dataflow (DDF) scheduling algorithm in the Ptolemy II framework, based on original work in Ptolemy Classic.
The design of the algorithm is guided by several criteria that have implications for practical implementation. We compare the performance of SDF, DDF, and PN. We also discuss composing DDF with other models of computation (MoCs). Due to the Turing-completeness of DDF, it is not easy to define a meaningful iteration for a DDF submodel when it is embedded inside another MoC. We provide a suite of mechanisms that facilitate this process. We give several application examples to show how conditionals, data-dependent iterations, recursion, and other dynamic constructs can be modeled in the DDF domain.
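The run-time scheduling that distinguishes DDF from SDF can be sketched minimally: fire any actor whose declared token demand is met, and repeat until nothing is enabled. The one-actor graph below (a doubler draining a bounded source queue) is invented for illustration; Ptolemy II's DDF scheduler handles much more (rate changes, composition, deadlock questions).

```python
from collections import deque

def run_ddf(graph, queues):
    """Fire enabled actors until quiescence.

    graph: list of (needs, fire) pairs, where needs maps a queue name to
    the token count required, and fire(queues) consumes/produces tokens.
    """
    fired = True
    while fired:
        fired = False
        for needs, fire in graph:
            if all(len(queues[q]) >= n for q, n in needs.items()):
                fire(queues)
                fired = True

queues = {"src": deque([1, 2, 3]), "out": deque()}
graph = [
    # doubler: needs one token on "src", emits its double on "out"
    ({"src": 1}, lambda q: q["out"].append(2 * q["src"].popleft())),
]
run_ddf(graph, queues)
# queues["out"] now holds deque([2, 4, 6])
```

An SDF scheduler could have computed this firing order at compile time from the fixed rates; the point of DDF is that the same loop still works when token demands are only known at run time.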
2.3. Project Training and Development

As part of setting up CHESS (the new UCB Center for Hybrid and Embedded Software Systems), we have created a CHESS Software Lab, which is focused on supporting the creation of publication-quality software for embedded systems design. The lab is physically a room with wireless and wired network connections, a large table for collaborative work, a large-format printer (used for UML diagrams and poster preparation), comfortable furniture supporting extended hours of collaborative work, a coffee machine, and a library that inherited a collection of software technology books from the Ptolemy Project. This room is used to promote a local version of the Extreme Programming (XP) software design practice, which advocates pair programming, design reviews, code reviews, extensive use of automated regression tests, and a collaboratively maintained body of code (we use CVS). The room began operation in March 2003 and has been in nearly constant use for collaborative design work. The principal focus of that work has been on advanced tool architectures for hybrid and embedded software systems design.

2.4. Outreach Activities

We are continuing our mission to build a modern systems science (MSS) with profound implications for the nature and scope of computer science and engineering research, the structure of computer science and electrical engineering curricula, and future industrial practice. This new systems science must pervade engineering education throughout the undergraduate and graduate levels.
Embedded software and systems represent a major departure from the current, separated structure of computer science (CS), computer engineering (CE), and electrical engineering (EE). In fact, the new, emerging systems science reintegrates the information and physical sciences. The impact of this change on teaching is profound, and cannot be confined to the graduate level. This year we have continued our work to lay the foundation for a new philosophy of undergraduate teaching at the participating institutions. We also used the summer months to foster an appreciation for research in underprivileged and minority students in engineering, by continuing to sponsor and participate in the established REU programs SUPERB-IT at UCB and SIPHER at VU. We continue the collaboration with San Jose State University to develop the undergraduate embedded control course jointly between Berkeley, Vanderbilt, and San Jose State University. Prof. Ping Hsu from San Jose State University teaches the class at both Berkeley and San Jose State.

2.4.1. Curriculum Development for Modern Systems Science (MSS)

Our agenda is to restructure computer science and electrical engineering curricula to adapt to a tighter integration of computational and physical systems. Based on the ongoing, groundbreaking effort at UCB, we are engaged in retooling undergraduate teaching at the participating institutions, and making the results widely available to encourage critical discussion and facilitate adoption.
We are engaged in an effort at UCB to restructure the undergraduate systems curriculum (which includes courses in signals and systems, communications, signal processing, control systems, image processing, and random processes). The traditional curriculum in these areas is mature and established, so making changes is challenging. We are at the stage of attempting to build faculty consensus for an approach that shortens the prerequisite chain and allows for the introduction of new courses in hybrid systems and embedded software systems.

Undergrad Course Insertion and Transfer

At many institutions, introductory courses are quite large, which makes conducting such a course a substantial undertaking. In particular, because the subject is new, relatively few homework exercises, lab exercises, and exam questions are available. To facilitate use of this approach by other instructors, we have engaged technical staff to build web infrastructure supporting such courses. We have built an instructor forum that enables submission and selection of problems from the text and from a library of submitted problems and exercises; a server-side infrastructure generates PDF files for problem sets and solution sets.

The tight integration of computational and physical topics offers opportunities for leveraging technology to illustrate fundamental concepts. We have developed a suite of web pages with applets that use sound, images, and graphs interactively. Our staff has extended and upgraded these applets and created a suite of PowerPoint slides for use by instructors.

We have begun to define an upper-division course in embedded software (aimed at juniors and seniors). This new course will replace the upper-division control course at San Jose State. We also continued to teach at UC Berkeley the integrated course designed by Prof. Lee, which employs techniques discovered in hybrid and embedded systems research to interpret traditional signals.

Course: Structure and Interpretation of Signals and Systems (UCB, EECS 20N)
Instructors: Prof. Edward A. Lee and Prof. Pravin Varaiya

This course is an introduction to mathematical modeling techniques used in the design of electronic systems. Signals are defined as functions on a set. Examples include continuous-time signals (audio, radio, voltages), discrete-time signals (digital audio, synchronous circuits), images (discrete and continuous), discrete-event signals, and sequences. Systems are defined as mappings on signals. The notion of state is discussed in a general way. Feedback systems and automata illustrate alternative approaches to modeling state in systems. Automata theory is studied using Mealy machines with input and output. Notions of equivalence of automata and concurrent composition are introduced. Hybrid systems combine time-based signals with event sequences. Difference and differential equations are considered as models for linear, time-invariant state machines. Frequency-domain models for signals and frequency response for systems are investigated. Sampling of continuous signals is discussed to relate continuous-time and discrete-time signals.
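The Mealy-machine material in the course description above lends itself to a small executable illustration. The following sketch is our own illustrative code, not course material; the class name, state names, and transition-table representation are hypothetical. It implements a Mealy machine, in which each output symbol depends on both the current state and the current input, and instantiates it as a rising-edge detector over a binary input signal:

```java
import java.util.HashMap;
import java.util.Map;

public class MealySketch {
    // Transitions are kept in a map from (state, input) to (nextState, output).
    // Because the output is attached to the transition rather than to the
    // state, this is a Mealy machine rather than a Moore machine.
    private final Map<String, String[]> transitions = new HashMap<>();
    private String state;

    public MealySketch(String initialState) { this.state = initialState; }

    public void addTransition(String from, char input, String to, char output) {
        transitions.put(from + "/" + input, new String[]{to, String.valueOf(output)});
    }

    // React to one input symbol, emitting one output symbol.
    public char step(char input) {
        String[] edge = transitions.get(state + "/" + input);
        state = edge[0];
        return edge[1].charAt(0);
    }

    // Run the machine on a whole finite input signal (a sequence of symbols).
    public String run(String inputs) {
        StringBuilder out = new StringBuilder();
        for (char c : inputs.toCharArray()) out.append(step(c));
        return out.toString();
    }

    public static void main(String[] args) {
        // Rising-edge detector: output '1' exactly when the input changes
        // from '0' to '1', and '0' otherwise.
        MealySketch m = new MealySketch("low");
        m.addTransition("low",  '0', "low",  '0');
        m.addTransition("low",  '1', "high", '1');
        m.addTransition("high", '1', "high", '0');
        m.addTransition("high", '0', "low",  '0');
        System.out.println(m.run("0011010")); // prints "0010010"
    }
}
```

Two machines of this kind could be checked for equivalence in the sense used in the course by comparing their output sequences over a set of input signals; concurrent composition corresponds to feeding one machine's output stream into another's input.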


Publication date: 2005